Aug 13 07:05:17.076536 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:05:17.076559 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:05:17.076571 kernel: BIOS-provided physical RAM map: Aug 13 07:05:17.076578 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:05:17.076584 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:05:17.076590 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:05:17.076597 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Aug 13 07:05:17.076604 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Aug 13 07:05:17.076610 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 07:05:17.076619 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 07:05:17.076625 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:05:17.076631 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:05:17.076641 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 07:05:17.076648 kernel: NX (Execute Disable) protection: active Aug 13 07:05:17.076656 kernel: APIC: Static calls initialized Aug 13 07:05:17.076667 kernel: SMBIOS 2.8 present. 
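The e820 map above lists exactly two usable RAM ranges; everything else is firmware-reserved. As a quick cross-check (not part of the log), the Python sketch below sums those two ranges and lands within a few KiB of the 2571752K total the kernel reports later during memory init.

# Usable BIOS-e820 ranges copied from the log above; the parsing is illustrative only.
usable_ranges = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000009cfdbfff),
]
total_bytes = sum(end - start + 1 for start, end in usable_ranges)
print(f"usable e820 memory: {total_bytes} bytes (~{total_bytes / 2**20:.1f} MiB)")
# -> ~2511.5 MiB, essentially the 2571752K total reported further down in this log.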
Aug 13 07:05:17.076674 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Aug 13 07:05:17.076681 kernel: Hypervisor detected: KVM Aug 13 07:05:17.076688 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:05:17.076695 kernel: kvm-clock: using sched offset of 2848356788 cycles Aug 13 07:05:17.076702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:05:17.076709 kernel: tsc: Detected 2794.750 MHz processor Aug 13 07:05:17.076716 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:05:17.076724 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:05:17.076733 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Aug 13 07:05:17.076741 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:05:17.076748 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:05:17.076755 kernel: Using GB pages for direct mapping Aug 13 07:05:17.076762 kernel: ACPI: Early table checksum verification disabled Aug 13 07:05:17.076769 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Aug 13 07:05:17.076776 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076783 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076790 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076799 kernel: ACPI: FACS 0x000000009CFE0000 000040 Aug 13 07:05:17.076806 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076813 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076820 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076827 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:05:17.076834 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Aug 13 07:05:17.076841 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Aug 13 07:05:17.076852 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Aug 13 07:05:17.076861 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Aug 13 07:05:17.076868 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Aug 13 07:05:17.076876 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Aug 13 07:05:17.076883 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Aug 13 07:05:17.076890 kernel: No NUMA configuration found Aug 13 07:05:17.076897 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Aug 13 07:05:17.076907 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Aug 13 07:05:17.076915 kernel: Zone ranges: Aug 13 07:05:17.076922 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:05:17.076929 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Aug 13 07:05:17.076936 kernel: Normal empty Aug 13 07:05:17.076943 kernel: Movable zone start for each node Aug 13 07:05:17.076951 kernel: Early memory node ranges Aug 13 07:05:17.076958 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:05:17.076965 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Aug 13 07:05:17.076972 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Aug 13 07:05:17.076982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:05:17.076991 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:05:17.076999 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Aug 13 07:05:17.077006 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:05:17.077013 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:05:17.077021 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:05:17.077028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:05:17.077035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:05:17.077042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:05:17.077052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:05:17.077059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:05:17.077066 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:05:17.077074 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:05:17.077081 kernel: TSC deadline timer available Aug 13 07:05:17.077088 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 13 07:05:17.077095 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:05:17.077103 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 07:05:17.077112 kernel: kvm-guest: setup PV sched yield Aug 13 07:05:17.077121 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 07:05:17.077129 kernel: Booting paravirtualized kernel on KVM Aug 13 07:05:17.077136 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:05:17.077143 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 13 07:05:17.077151 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Aug 13 07:05:17.077158 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Aug 13 07:05:17.077165 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 13 07:05:17.077172 kernel: kvm-guest: PV spinlocks enabled Aug 13 07:05:17.077179 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:05:17.077190 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:05:17.077198 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:05:17.077205 kernel: random: crng init done Aug 13 07:05:17.077213 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:05:17.077220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:05:17.077227 kernel: Fallback order for Node 0: 0 Aug 13 07:05:17.077234 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Aug 13 07:05:17.077242 kernel: Policy zone: DMA32 Aug 13 07:05:17.077251 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:05:17.077259 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136904K reserved, 0K cma-reserved) Aug 13 07:05:17.077266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 07:05:17.077274 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:05:17.077281 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:05:17.077288 kernel: Dynamic Preempt: voluntary Aug 13 07:05:17.077295 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:05:17.077303 kernel: rcu: RCU event tracing is enabled. Aug 13 07:05:17.077311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 07:05:17.077320 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:05:17.077328 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:05:17.077335 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:05:17.077342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:05:17.077352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 07:05:17.077359 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 13 07:05:17.077366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:05:17.077373 kernel: Console: colour VGA+ 80x25 Aug 13 07:05:17.077381 kernel: printk: console [ttyS0] enabled Aug 13 07:05:17.077390 kernel: ACPI: Core revision 20230628 Aug 13 07:05:17.077398 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:05:17.077405 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:05:17.077412 kernel: x2apic enabled Aug 13 07:05:17.077419 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:05:17.077430 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 07:05:17.077437 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 07:05:17.077445 kernel: kvm-guest: setup PV IPIs Aug 13 07:05:17.077462 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:05:17.077469 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 07:05:17.077477 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 13 07:05:17.077485 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 07:05:17.077512 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 07:05:17.077520 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 07:05:17.077528 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:05:17.077535 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:05:17.077543 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:05:17.077553 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 13 07:05:17.077561 kernel: RETBleed: Mitigation: untrained return thunk Aug 13 07:05:17.077571 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:05:17.077579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:05:17.077587 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
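As an aside on the delay-loop line above: lpj=2794750 is taken straight from the 2794.750 MHz TSC, and the reported 5589.50 BogoMIPS follows from loops_per_jiffy * HZ / 500000, assuming HZ=1000 (which is what these numbers imply). A one-line check:

# Arithmetic check of the "Calibrating delay loop (skipped)" line above; HZ=1000 is an assumption.
HZ, lpj = 1000, 2_794_750
print(f"{lpj * HZ / 500_000:.2f} BogoMIPS")  # -> 5589.50 BogoMIPS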
Aug 13 07:05:17.077595 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 07:05:17.077602 kernel: x86/bugs: return thunk changed Aug 13 07:05:17.077610 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 07:05:17.077620 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:05:17.077628 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:05:17.077635 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:05:17.077643 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:05:17.077650 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 13 07:05:17.077658 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:05:17.077666 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:05:17.077673 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:05:17.077681 kernel: landlock: Up and running. Aug 13 07:05:17.077691 kernel: SELinux: Initializing. Aug 13 07:05:17.077698 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:05:17.077706 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:05:17.077714 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 13 07:05:17.077729 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:05:17.077743 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:05:17.077750 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:05:17.077764 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 07:05:17.077782 kernel: ... version: 0 Aug 13 07:05:17.077793 kernel: ... bit width: 48 Aug 13 07:05:17.077801 kernel: ... generic registers: 6 Aug 13 07:05:17.077808 kernel: ... value mask: 0000ffffffffffff Aug 13 07:05:17.077816 kernel: ... max period: 00007fffffffffff Aug 13 07:05:17.077823 kernel: ... fixed-purpose events: 0 Aug 13 07:05:17.077831 kernel: ... event mask: 000000000000003f Aug 13 07:05:17.077838 kernel: signal: max sigframe size: 1776 Aug 13 07:05:17.077846 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:05:17.077854 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:05:17.077864 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:05:17.077872 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:05:17.077879 kernel: .... 
node #0, CPUs: #1 #2 #3 Aug 13 07:05:17.077887 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 07:05:17.077894 kernel: smpboot: Max logical packages: 1 Aug 13 07:05:17.077902 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 13 07:05:17.077909 kernel: devtmpfs: initialized Aug 13 07:05:17.077917 kernel: x86/mm: Memory block size: 128MB Aug 13 07:05:17.077925 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:05:17.077935 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 07:05:17.077942 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:05:17.077950 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:05:17.077957 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:05:17.077965 kernel: audit: type=2000 audit(1755068716.262:1): state=initialized audit_enabled=0 res=1 Aug 13 07:05:17.077972 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:05:17.077980 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:05:17.077988 kernel: cpuidle: using governor menu Aug 13 07:05:17.077995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:05:17.078005 kernel: dca service started, version 1.12.1 Aug 13 07:05:17.078013 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 07:05:17.078020 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 07:05:17.078028 kernel: PCI: Using configuration type 1 for base access Aug 13 07:05:17.078036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:05:17.078043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:05:17.078051 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:05:17.078061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:05:17.078068 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:05:17.078078 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:05:17.078086 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:05:17.078093 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:05:17.078101 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:05:17.078109 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:05:17.078116 kernel: ACPI: Interpreter enabled Aug 13 07:05:17.078124 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 07:05:17.078131 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:05:17.078139 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:05:17.078149 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:05:17.078156 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 07:05:17.078164 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:05:17.078385 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:05:17.078554 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 07:05:17.078691 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 07:05:17.078702 kernel: PCI host bridge to bus 0000:00 Aug 13 07:05:17.078846 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:05:17.078969 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Aug 13 07:05:17.079087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 07:05:17.079203 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Aug 13 07:05:17.079319 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 07:05:17.079435 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Aug 13 07:05:17.079581 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:05:17.079753 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 07:05:17.079995 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 07:05:17.080152 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Aug 13 07:05:17.080306 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Aug 13 07:05:17.080434 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Aug 13 07:05:17.080588 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:05:17.080740 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 07:05:17.080877 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 07:05:17.081007 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Aug 13 07:05:17.081135 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 07:05:17.081418 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:05:17.081614 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:05:17.081813 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Aug 13 07:05:17.081952 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Aug 13 07:05:17.082105 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:05:17.082237 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Aug 13 07:05:17.082364 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Aug 13 07:05:17.082521 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Aug 13 07:05:17.082654 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Aug 13 07:05:17.082789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 07:05:17.083107 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 07:05:17.083330 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 07:05:17.083485 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Aug 13 07:05:17.084042 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Aug 13 07:05:17.084194 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 07:05:17.084416 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 07:05:17.084443 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:05:17.084465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:05:17.084487 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:05:17.084575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:05:17.084583 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 07:05:17.084591 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 07:05:17.084598 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 07:05:17.084606 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 07:05:17.084614 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 
13 07:05:17.084621 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 07:05:17.084633 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 07:05:17.084640 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 07:05:17.084648 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 07:05:17.084656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 07:05:17.084663 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 07:05:17.084671 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 07:05:17.084679 kernel: iommu: Default domain type: Translated Aug 13 07:05:17.084686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:05:17.084694 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:05:17.084704 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:05:17.084711 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:05:17.084719 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Aug 13 07:05:17.084885 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 07:05:17.085048 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 07:05:17.085193 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:05:17.085205 kernel: vgaarb: loaded Aug 13 07:05:17.085213 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:05:17.085228 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:05:17.085236 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:05:17.085244 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:05:17.085251 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:05:17.085259 kernel: pnp: PnP ACPI init Aug 13 07:05:17.085420 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 07:05:17.085432 kernel: pnp: PnP ACPI: found 6 devices Aug 13 07:05:17.085440 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:05:17.085451 kernel: NET: Registered PF_INET protocol family Aug 13 07:05:17.085459 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:05:17.085467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 07:05:17.085475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:05:17.085482 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:05:17.085566 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 07:05:17.085575 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 07:05:17.085582 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:05:17.085590 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:05:17.085601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:05:17.085609 kernel: NET: Registered PF_XDP protocol family Aug 13 07:05:17.085735 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:05:17.085850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:05:17.086004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 07:05:17.086121 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Aug 13 07:05:17.086236 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
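Several of the hash-table lines above use the form "entries (order: N, X bytes)"; the byte count is simply 2**N pages of 4 KiB. A quick check (not part of the log) against two of the tables reported during this boot:

# order-to-bytes check for the TCP established table (order 6) and the dentry cache (order 10).
PAGE_SIZE = 4096
for name, order in [("TCP established", 6), ("Dentry cache", 10)]:
    print(name, (1 << order) * PAGE_SIZE, "bytes")
# -> 262144 and 4194304 bytes, matching the log lines above.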
Aug 13 07:05:17.086350 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Aug 13 07:05:17.086365 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:05:17.086375 kernel: Initialise system trusted keyrings Aug 13 07:05:17.086386 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 07:05:17.086396 kernel: Key type asymmetric registered Aug 13 07:05:17.086407 kernel: Asymmetric key parser 'x509' registered Aug 13 07:05:17.086418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:05:17.086428 kernel: io scheduler mq-deadline registered Aug 13 07:05:17.086436 kernel: io scheduler kyber registered Aug 13 07:05:17.086444 kernel: io scheduler bfq registered Aug 13 07:05:17.086451 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:05:17.086463 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 07:05:17.086471 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 07:05:17.086478 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 07:05:17.086486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:05:17.086513 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:05:17.086534 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:05:17.086542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:05:17.086550 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:05:17.086706 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 13 07:05:17.086723 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:05:17.086846 kernel: rtc_cmos 00:04: registered as rtc0 Aug 13 07:05:17.086966 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:05:16 UTC (1755068716) Aug 13 07:05:17.087084 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 07:05:17.087095 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 07:05:17.087102 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:05:17.087110 kernel: Segment Routing with IPv6 Aug 13 07:05:17.087118 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:05:17.087129 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:05:17.087137 kernel: Key type dns_resolver registered Aug 13 07:05:17.087144 kernel: IPI shorthand broadcast: enabled Aug 13 07:05:17.087152 kernel: sched_clock: Marking stable (901003662, 144184140)->(1075151549, -29963747) Aug 13 07:05:17.087160 kernel: registered taskstats version 1 Aug 13 07:05:17.087168 kernel: Loading compiled-in X.509 certificates Aug 13 07:05:17.087175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:05:17.087183 kernel: Key type .fscrypt registered Aug 13 07:05:17.087191 kernel: Key type fscrypt-provisioning registered Aug 13 07:05:17.087201 kernel: ima: No TPM chip found, activating TPM-bypass! 
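The rtc_cmos line above prints both the UTC time and the matching Unix timestamp; the two agree, as a quick conversion (illustrative, not part of the log) shows:

from datetime import datetime, timezone
print(datetime.fromtimestamp(1755068716, tz=timezone.utc).isoformat())
# -> 2025-08-13T07:05:16+00:00, matching "2025-08-13T07:05:16 UTC (1755068716)" above.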
Aug 13 07:05:17.087209 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:05:17.087216 kernel: ima: No architecture policies found Aug 13 07:05:17.087224 kernel: clk: Disabling unused clocks Aug 13 07:05:17.087232 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:05:17.087239 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:05:17.087247 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:05:17.087255 kernel: Run /init as init process Aug 13 07:05:17.087264 kernel: with arguments: Aug 13 07:05:17.087272 kernel: /init Aug 13 07:05:17.087280 kernel: with environment: Aug 13 07:05:17.087287 kernel: HOME=/ Aug 13 07:05:17.087295 kernel: TERM=linux Aug 13 07:05:17.087302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:05:17.087312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:05:17.087322 systemd[1]: Detected virtualization kvm. Aug 13 07:05:17.087333 systemd[1]: Detected architecture x86-64. Aug 13 07:05:17.087341 systemd[1]: Running in initrd. Aug 13 07:05:17.087349 systemd[1]: No hostname configured, using default hostname. Aug 13 07:05:17.087357 systemd[1]: Hostname set to . Aug 13 07:05:17.087365 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:05:17.087373 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:05:17.087381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:05:17.087390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:05:17.087401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:05:17.087410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:05:17.087430 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:05:17.087441 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:05:17.087451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:05:17.087462 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:05:17.087470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:05:17.087479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:05:17.087546 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:05:17.087558 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:05:17.087566 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:05:17.087574 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:05:17.087583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:05:17.087596 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:05:17.087604 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:05:17.087613 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
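The device units above (e.g. dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM) show systemd's path escaping: '/' separators become '-', and other bytes outside [a-zA-Z0-9_.] become \xNN. A rough re-creation of that rule (the real systemd-escape logic has a few more corner cases):

def systemd_path_escape(path: str) -> str:
    # Approximate systemd unit-name escaping for a filesystem path.
    escaped = []
    for part in path.strip("/").split("/"):
        escaped.append("".join(
            ch if ch.isalnum() or ch in "_." else "\\x%02x" % ord(ch)
            for ch in part))
    return "-".join(escaped)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device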
Aug 13 07:05:17.087621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:05:17.087630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:05:17.087638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:05:17.087646 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:05:17.087655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:05:17.087663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:05:17.087674 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:05:17.087682 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:05:17.087691 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:05:17.087699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:05:17.087707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:05:17.087716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:05:17.087724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:05:17.087732 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:05:17.087744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:05:17.087774 systemd-journald[191]: Collecting audit messages is disabled. Aug 13 07:05:17.087795 systemd-journald[191]: Journal started Aug 13 07:05:17.087815 systemd-journald[191]: Runtime Journal (/run/log/journal/11284cd5ed44470ea09011b139c73593) is 6.0M, max 48.4M, 42.3M free. Aug 13 07:05:17.081861 systemd-modules-load[194]: Inserted module 'overlay' Aug 13 07:05:17.119279 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:05:17.119301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:05:17.119313 kernel: Bridge firewalling registered Aug 13 07:05:17.113979 systemd-modules-load[194]: Inserted module 'br_netfilter' Aug 13 07:05:17.122799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:05:17.125151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:05:17.127550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:05:17.144698 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:05:17.147809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:05:17.150484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:05:17.154550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:05:17.166725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:05:17.169302 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:05:17.169622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:05:17.174427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:05:17.181015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 07:05:17.183906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:05:17.192721 dracut-cmdline[229]: dracut-dracut-053 Aug 13 07:05:17.196084 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:05:17.238959 systemd-resolved[233]: Positive Trust Anchors: Aug 13 07:05:17.238977 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:05:17.239007 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:05:17.241887 systemd-resolved[233]: Defaulting to hostname 'linux'. Aug 13 07:05:17.243451 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:05:17.248724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:05:17.311530 kernel: SCSI subsystem initialized Aug 13 07:05:17.320519 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:05:17.330525 kernel: iscsi: registered transport (tcp) Aug 13 07:05:17.352524 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:05:17.352551 kernel: QLogic iSCSI HBA Driver Aug 13 07:05:17.405390 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:05:17.412723 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:05:17.436536 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:05:17.436564 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:05:17.437547 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:05:17.479517 kernel: raid6: avx2x4 gen() 30361 MB/s Aug 13 07:05:17.496518 kernel: raid6: avx2x2 gen() 31333 MB/s Aug 13 07:05:17.513551 kernel: raid6: avx2x1 gen() 25653 MB/s Aug 13 07:05:17.513584 kernel: raid6: using algorithm avx2x2 gen() 31333 MB/s Aug 13 07:05:17.531562 kernel: raid6: .... xor() 19684 MB/s, rmw enabled Aug 13 07:05:17.531621 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:05:17.552524 kernel: xor: automatically using best checksumming function avx Aug 13 07:05:17.707553 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:05:17.722203 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:05:17.735782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:05:17.748979 systemd-udevd[414]: Using default interface naming scheme 'v255'. Aug 13 07:05:17.753790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
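The raid6 lines above show the kernel benchmarking several gen() implementations and keeping the fastest; a toy re-creation of that selection, using the throughputs measured during this boot:

# Pick the fastest raid6 gen() implementation from the benchmark results logged above.
results_mb_s = {"avx2x4": 30361, "avx2x2": 31333, "avx2x1": 25653}
best = max(results_mb_s, key=results_mb_s.get)
print(f"raid6: using algorithm {best} gen() {results_mb_s[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 31333 MB/s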
Aug 13 07:05:17.764651 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:05:17.779398 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Aug 13 07:05:17.817445 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:05:17.825736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:05:17.893794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:05:17.904338 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:05:17.918124 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:05:17.918888 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:05:17.921851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:05:17.926663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:05:17.931579 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 13 07:05:17.938754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:05:17.942963 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 07:05:17.947546 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:05:17.951028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:05:17.955002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:05:17.961527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:05:17.961563 kernel: GPT:9289727 != 19775487 Aug 13 07:05:17.961581 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:05:17.961592 kernel: GPT:9289727 != 19775487 Aug 13 07:05:17.961602 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:05:17.961613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:05:17.960509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:05:17.963960 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:05:17.967011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:05:17.968623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:05:17.973171 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:05:17.973196 kernel: libata version 3.00 loaded. Aug 13 07:05:17.973214 kernel: AES CTR mode by8 optimization enabled Aug 13 07:05:17.969948 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:05:17.982900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
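The virtio_blk line above sizes vda at 19775488 512-byte blocks; the GB/GiB figures check out, and the GPT complaints that follow just mean the backup header is not at the last LBA (typically because the disk image was grown after the partition table was written). A quick check, not part of the log:

# Size arithmetic for the virtio_blk line above.
blocks, block_size = 19_775_488, 512
size = blocks * block_size
print(f"{size / 1e9:.1f} GB / {size / 2**30:.2f} GiB")  # -> 10.1 GB / 9.43 GiB
# The primary GPT header records the alternate header at LBA 9289727, while the
# disk's last LBA is 19775487, hence the "9289727 != 19775487" warning.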
Aug 13 07:05:17.989762 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 07:05:17.990008 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 07:05:17.990033 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 07:05:17.990218 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 07:05:18.001557 kernel: scsi host0: ahci Aug 13 07:05:18.001861 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Aug 13 07:05:18.001878 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Aug 13 07:05:18.001893 kernel: scsi host1: ahci Aug 13 07:05:18.006956 kernel: scsi host2: ahci Aug 13 07:05:18.013172 kernel: scsi host3: ahci Aug 13 07:05:18.018519 kernel: scsi host4: ahci Aug 13 07:05:18.022068 kernel: scsi host5: ahci Aug 13 07:05:18.022271 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Aug 13 07:05:18.022284 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Aug 13 07:05:18.022294 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Aug 13 07:05:18.022305 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Aug 13 07:05:18.022321 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Aug 13 07:05:18.022332 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Aug 13 07:05:18.021130 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:05:18.031101 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:05:18.032391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:05:18.039236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:05:18.074665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:05:18.075040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:05:18.089665 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:05:18.091572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:05:18.101470 disk-uuid[567]: Primary Header is updated. Aug 13 07:05:18.101470 disk-uuid[567]: Secondary Entries is updated. Aug 13 07:05:18.101470 disk-uuid[567]: Secondary Header is updated. Aug 13 07:05:18.105166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:05:18.108558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:05:18.111114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:05:18.114743 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:05:18.333539 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 07:05:18.333634 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 07:05:18.334526 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 07:05:18.335535 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 07:05:18.335610 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 07:05:18.336029 kernel: ata3.00: applying bridge limits Aug 13 07:05:18.337522 kernel: ata3.00: configured for UDMA/100 Aug 13 07:05:18.337578 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 07:05:18.342526 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 07:05:18.342548 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 07:05:18.378528 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 07:05:18.378782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:05:18.392513 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 07:05:19.112508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:05:19.113049 disk-uuid[572]: The operation has completed successfully. Aug 13 07:05:19.142137 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:05:19.142286 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:05:19.164712 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:05:19.168121 sh[595]: Success Aug 13 07:05:19.180517 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 07:05:19.215318 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:05:19.227115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:05:19.229733 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:05:19.241602 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:05:19.241633 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:05:19.241644 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:05:19.242594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:05:19.243882 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:05:19.248145 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:05:19.249648 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:05:19.250480 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:05:19.253204 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:05:19.265664 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:05:19.265689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:05:19.265703 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:05:19.268525 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:05:19.277716 systemd[1]: mnt-oem.mount: Deactivated successfully. 
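For context on the verity-setup step above: dm-verity hashes the read-only /usr partition in fixed-size blocks and checks them against a hash tree whose root is the verity.usrhash= value on the kernel command line (here using the sha256-ni implementation). A toy illustration of the per-block hashing step only; the block contents and the 4 KiB size are made-up stand-ins:

import hashlib
block = b"\x00" * 4096  # hypothetical data block, not taken from the log
print(hashlib.sha256(block).hexdigest())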
Aug 13 07:05:19.279404 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:05:19.287877 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:05:19.293642 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:05:19.351624 ignition[690]: Ignition 2.19.0 Aug 13 07:05:19.351641 ignition[690]: Stage: fetch-offline Aug 13 07:05:19.351686 ignition[690]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:19.351699 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:19.351818 ignition[690]: parsed url from cmdline: "" Aug 13 07:05:19.351823 ignition[690]: no config URL provided Aug 13 07:05:19.351830 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:05:19.351847 ignition[690]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:05:19.351881 ignition[690]: op(1): [started] loading QEMU firmware config module Aug 13 07:05:19.351888 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 07:05:19.360514 ignition[690]: op(1): [finished] loading QEMU firmware config module Aug 13 07:05:19.380282 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:05:19.390641 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:05:19.402237 ignition[690]: parsing config with SHA512: 64d69ceccb107abb48ef7915a09c149dc7c6e16e99b44f556f6d400942771340d4b096bbc77cbd064cb0330251810a1a022f6d2742c07a591b8a7c69c0aa308a Aug 13 07:05:19.405625 unknown[690]: fetched base config from "system" Aug 13 07:05:19.405637 unknown[690]: fetched user config from "qemu" Aug 13 07:05:19.405978 ignition[690]: fetch-offline: fetch-offline passed Aug 13 07:05:19.406038 ignition[690]: Ignition finished successfully Aug 13 07:05:19.407972 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:05:19.414406 systemd-networkd[783]: lo: Link UP Aug 13 07:05:19.414415 systemd-networkd[783]: lo: Gained carrier Aug 13 07:05:19.416179 systemd-networkd[783]: Enumeration completed Aug 13 07:05:19.416296 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:05:19.416728 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:05:19.416733 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:05:19.418345 systemd[1]: Reached target network.target - Network. Aug 13 07:05:19.418944 systemd-networkd[783]: eth0: Link UP Aug 13 07:05:19.418948 systemd-networkd[783]: eth0: Gained carrier Aug 13 07:05:19.418956 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:05:19.420259 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 07:05:19.433676 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
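Ignition's fetch-offline stage above logs the SHA512 digest of the config it obtained via the QEMU firmware config device. A minimal sketch of computing that kind of digest with hashlib; the file path below is hypothetical, not taken from the log:

import hashlib
with open("/run/ignition.json", "rb") as f:  # hypothetical path, for illustration only
    print(hashlib.sha512(f.read()).hexdigest())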
Aug 13 07:05:19.446595 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:05:19.450074 ignition[786]: Ignition 2.19.0 Aug 13 07:05:19.450085 ignition[786]: Stage: kargs Aug 13 07:05:19.450240 ignition[786]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:19.450252 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:19.451133 ignition[786]: kargs: kargs passed Aug 13 07:05:19.451178 ignition[786]: Ignition finished successfully Aug 13 07:05:19.455416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:05:19.465632 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:05:19.476839 ignition[794]: Ignition 2.19.0 Aug 13 07:05:19.476852 ignition[794]: Stage: disks Aug 13 07:05:19.477055 ignition[794]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:19.477070 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:19.480184 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:05:19.478138 ignition[794]: disks: disks passed Aug 13 07:05:19.481912 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:05:19.478200 ignition[794]: Ignition finished successfully Aug 13 07:05:19.483741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:05:19.484941 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:05:19.486479 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:05:19.487558 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:05:19.499634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:05:19.512708 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:05:19.518109 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:05:19.524599 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:05:19.608533 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:05:19.608948 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:05:19.610368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:05:19.622585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:05:19.624554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:05:19.625825 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:05:19.625863 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:05:19.634284 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Aug 13 07:05:19.634309 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:05:19.625884 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
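The DHCP lease logged above is 10.0.0.45/16 with gateway 10.0.0.1; a quick look at what that prefix implies (illustrative only, not part of the log):

import ipaddress
iface = ipaddress.ip_interface("10.0.0.45/16")
print(iface.network, iface.netmask)                        # 10.0.0.0/16 255.255.0.0
print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True: the gateway is on-link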
Aug 13 07:05:19.640880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:05:19.640904 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:05:19.640915 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:05:19.632503 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:05:19.636903 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:05:19.642939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:05:19.673032 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:05:19.677937 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:05:19.681868 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:05:19.685892 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:05:19.767060 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:05:19.775602 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:05:19.778796 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:05:19.784517 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:05:19.804240 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:05:19.809915 ignition[926]: INFO : Ignition 2.19.0 Aug 13 07:05:19.809915 ignition[926]: INFO : Stage: mount Aug 13 07:05:19.811667 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:19.811667 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:19.811667 ignition[926]: INFO : mount: mount passed Aug 13 07:05:19.811667 ignition[926]: INFO : Ignition finished successfully Aug 13 07:05:19.817472 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:05:19.827596 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:05:20.241124 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:05:20.253641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:05:20.261095 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Aug 13 07:05:20.261135 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:05:20.261147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:05:20.262526 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:05:20.265523 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:05:20.266425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:05:20.295010 ignition[958]: INFO : Ignition 2.19.0 Aug 13 07:05:20.295010 ignition[958]: INFO : Stage: files Aug 13 07:05:20.296839 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:20.296839 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:20.296839 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:05:20.300904 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:05:20.300904 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:05:20.305780 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:05:20.307192 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:05:20.308677 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:05:20.307752 unknown[958]: wrote ssh authorized keys file for user: core Aug 13 07:05:20.311159 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:05:20.311159 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 07:05:20.347742 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:05:20.478245 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:05:20.478245 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:05:20.481912 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 07:05:20.575793 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:05:20.704015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:05:20.704015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:05:20.708432 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:05:20.708432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 07:05:20.766701 systemd-networkd[783]: eth0: Gained IPv6LL Aug 13 07:05:20.950428 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:05:21.751953 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:05:21.751953 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 07:05:21.756601 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 07:05:21.786792 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:05:21.792915 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:05:21.794759 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:05:21.794759 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:05:21.794759 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:05:21.794759 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:05:21.794759 ignition[958]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:05:21.794759 ignition[958]: INFO : files: files passed Aug 13 07:05:21.794759 ignition[958]: INFO : Ignition finished successfully Aug 13 07:05:21.797181 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:05:21.811691 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:05:21.813948 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:05:21.816184 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:05:21.816304 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:05:21.825732 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:05:21.828910 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:05:21.828910 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:05:21.832622 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:05:21.835793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:05:21.837375 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:05:21.856665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:05:21.887000 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:05:21.888220 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:05:21.891832 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:05:21.894174 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:05:21.896553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:05:21.899152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:05:21.917924 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:05:21.922180 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:05:21.937686 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:05:21.939958 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:05:21.942309 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:05:21.944092 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:05:21.945071 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:05:21.947546 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:05:21.949579 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:05:21.951366 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:05:21.953510 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:05:21.955755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:05:21.957956 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
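[Editorial note] The files stage above is driven by an Ignition config delivered at first boot; the config itself never appears in the journal. A minimal sketch of a config that would produce entries of this shape, written against the public Ignition v3 schema (only the helm path/URL and the two unit presets are taken from the log; everything else is illustrative, not this machine's actual config), might look roughly like this:

    import json

    # Hypothetical Ignition v3-style config sketch; not the config used on this
    # machine, whose contents are not present in the journal.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    # Written to /sysroot/opt/... in the log; /sysroot is the mount prefix.
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                }
            ]
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True},    # preset enabled in the log
                {"name": "coreos-metadata.service", "enabled": False}, # preset disabled in the log
            ]
        },
    }
    print(json.dumps(config, indent=2))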
Aug 13 07:05:21.959970 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:05:21.962371 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:05:21.964415 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:05:21.966407 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:05:21.968012 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:05:21.968988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:05:21.971190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:05:21.973308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:05:21.975615 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:05:21.976567 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:05:21.979055 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:05:21.980031 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:05:21.982293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:05:21.983348 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:05:21.985656 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:05:21.987366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:05:21.990543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:05:21.993256 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:05:21.995118 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:05:21.996973 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:05:21.997834 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:05:21.999748 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:05:22.000632 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:05:22.002661 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:05:22.003802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:05:22.006281 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:05:22.007245 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:05:22.021654 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:05:22.023521 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:05:22.023640 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:05:22.028400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:05:22.030226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:05:22.031383 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:05:22.034218 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:05:22.035517 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:05:22.044819 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:05:22.044974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 13 07:05:22.088229 ignition[1012]: INFO : Ignition 2.19.0 Aug 13 07:05:22.088229 ignition[1012]: INFO : Stage: umount Aug 13 07:05:22.089960 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:05:22.089960 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:05:22.089960 ignition[1012]: INFO : umount: umount passed Aug 13 07:05:22.089960 ignition[1012]: INFO : Ignition finished successfully Aug 13 07:05:22.092919 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:05:22.093049 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:05:22.094567 systemd[1]: Stopped target network.target - Network. Aug 13 07:05:22.096016 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:05:22.096093 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:05:22.097874 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:05:22.097938 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:05:22.099824 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:05:22.099892 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:05:22.101684 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:05:22.101752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:05:22.103705 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:05:22.105709 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:05:22.108697 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:05:22.116615 systemd-networkd[783]: eth0: DHCPv6 lease lost Aug 13 07:05:22.118771 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:05:22.118915 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:05:22.123106 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:05:22.124126 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:05:22.127115 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:05:22.128041 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:05:22.142609 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:05:22.143534 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:05:22.143595 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:05:22.143845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:05:22.143890 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:05:22.147363 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:05:22.147413 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:05:22.149609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:05:22.149658 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:05:22.152661 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:05:22.169294 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:05:22.169517 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 13 07:05:22.170669 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:05:22.170786 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:05:22.174228 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:05:22.174316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:05:22.174538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:05:22.174591 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:05:22.174968 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:05:22.175025 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:05:22.175750 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:05:22.175806 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:05:22.176406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:05:22.176457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:05:22.178257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:05:22.186088 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:05:22.186155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:05:22.187340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:05:22.187403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:05:22.196665 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:05:22.196792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:05:22.541886 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:05:22.542046 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:05:22.543299 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:05:22.545647 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:05:22.545704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:05:22.559659 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:05:22.568595 systemd[1]: Switching root. Aug 13 07:05:22.602069 systemd-journald[191]: Journal stopped Aug 13 07:05:24.442299 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Aug 13 07:05:24.442370 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:05:24.442396 kernel: SELinux: policy capability open_perms=1 Aug 13 07:05:24.442409 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:05:24.442422 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:05:24.442433 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:05:24.442445 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:05:24.442456 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:05:24.442473 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:05:24.442507 kernel: audit: type=1403 audit(1755068723.660:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:05:24.442519 systemd[1]: Successfully loaded SELinux policy in 40.939ms. Aug 13 07:05:24.442556 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.230ms. 
Aug 13 07:05:24.442570 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:05:24.442583 systemd[1]: Detected virtualization kvm. Aug 13 07:05:24.442595 systemd[1]: Detected architecture x86-64. Aug 13 07:05:24.442607 systemd[1]: Detected first boot. Aug 13 07:05:24.442624 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:05:24.442636 zram_generator::config[1058]: No configuration found. Aug 13 07:05:24.442649 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:05:24.442664 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:05:24.442676 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:05:24.442689 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:05:24.442702 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:05:24.442714 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:05:24.442726 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:05:24.442739 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:05:24.442751 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:05:24.442763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:05:24.442779 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:05:24.442791 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:05:24.442805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:05:24.442817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:05:24.442829 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:05:24.442842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:05:24.442854 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:05:24.442866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:05:24.442878 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:05:24.442893 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:05:24.442905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:05:24.442917 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:05:24.442930 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:05:24.442942 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:05:24.442954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:05:24.442966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:05:24.442979 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:05:24.442994 systemd[1]: Reached target swap.target - Swaps. 
Aug 13 07:05:24.443007 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:05:24.443019 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:05:24.443031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:05:24.443043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:05:24.443055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:05:24.443067 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:05:24.443079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:05:24.443094 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:05:24.443109 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:05:24.443121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:24.443133 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:05:24.443145 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:05:24.443157 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:05:24.443169 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:05:24.443182 systemd[1]: Reached target machines.target - Containers. Aug 13 07:05:24.443195 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:05:24.443210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:05:24.443222 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:05:24.443235 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:05:24.443247 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:05:24.443261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:05:24.443274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:05:24.443295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:05:24.443308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:05:24.443320 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:05:24.443336 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:05:24.443349 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:05:24.443361 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:05:24.443373 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:05:24.443385 kernel: fuse: init (API version 7.39) Aug 13 07:05:24.443397 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:05:24.443409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:05:24.443422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Aug 13 07:05:24.443437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:05:24.443449 kernel: loop: module loaded Aug 13 07:05:24.443461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:05:24.443473 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:05:24.443485 systemd[1]: Stopped verity-setup.service. Aug 13 07:05:24.443547 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:24.443560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:05:24.443572 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:05:24.443585 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:05:24.443600 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:05:24.443613 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:05:24.443625 kernel: ACPI: bus type drm_connector registered Aug 13 07:05:24.443636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:05:24.443648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:05:24.443663 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:05:24.443675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:05:24.443688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:05:24.443703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:05:24.443735 systemd-journald[1128]: Collecting audit messages is disabled. Aug 13 07:05:24.443757 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:05:24.443770 systemd-journald[1128]: Journal started Aug 13 07:05:24.443795 systemd-journald[1128]: Runtime Journal (/run/log/journal/11284cd5ed44470ea09011b139c73593) is 6.0M, max 48.4M, 42.3M free. Aug 13 07:05:24.196552 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:05:24.216640 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:05:24.217142 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:05:24.446104 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:05:24.447162 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:05:24.447353 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:05:24.448764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:05:24.448940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:05:24.450532 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:05:24.450711 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:05:24.452113 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:05:24.452298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:05:24.453827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:05:24.455340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:05:24.456877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Aug 13 07:05:24.474300 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:05:24.481605 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:05:24.483907 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:05:24.485043 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:05:24.485066 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:05:24.487059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:05:24.489421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:05:24.492675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:05:24.493840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:05:24.496707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:05:24.499751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:05:24.501396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:05:24.507081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:05:24.508335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:05:24.509411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:05:24.512528 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:05:24.523536 systemd-journald[1128]: Time spent on flushing to /var/log/journal/11284cd5ed44470ea09011b139c73593 is 15.734ms for 953 entries. Aug 13 07:05:24.523536 systemd-journald[1128]: System Journal (/var/log/journal/11284cd5ed44470ea09011b139c73593) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:05:24.571592 systemd-journald[1128]: Received client request to flush runtime journal. Aug 13 07:05:24.571632 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:05:24.519608 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:05:24.522336 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:05:24.524853 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:05:24.527930 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:05:24.529567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:05:24.542989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:05:24.555950 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:05:24.559049 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:05:24.564945 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:05:24.575459 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:05:24.605073 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
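[Editorial note] The journald flush statistics above reduce to a per-entry figure:

    # 15.734 ms spent flushing 953 entries, as reported by systemd-journald above.
    total_ms, entries = 15.734, 953
    print(f"{total_ms/entries*1000:.1f} us per entry")  # ~16.5 us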
Aug 13 07:05:24.609161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:05:24.619066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:05:24.623781 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:05:24.628002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:05:24.629640 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:05:24.640516 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 07:05:24.640094 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:05:24.647163 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 07:05:24.647183 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 07:05:24.656793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:05:24.683473 kernel: loop2: detected capacity change from 0 to 229808 Aug 13 07:05:24.719945 kernel: loop3: detected capacity change from 0 to 140768 Aug 13 07:05:24.738618 kernel: loop4: detected capacity change from 0 to 142488 Aug 13 07:05:24.747522 kernel: loop5: detected capacity change from 0 to 229808 Aug 13 07:05:24.751862 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:05:24.752546 (sd-merge)[1196]: Merged extensions into '/usr'. Aug 13 07:05:24.765556 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:05:24.765572 systemd[1]: Reloading... Aug 13 07:05:24.830622 zram_generator::config[1219]: No configuration found. Aug 13 07:05:24.947386 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:05:24.989798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:05:25.040605 systemd[1]: Reloading finished in 274 ms. Aug 13 07:05:25.076317 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:05:25.077892 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:05:25.091788 systemd[1]: Starting ensure-sysext.service... Aug 13 07:05:25.094145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:05:25.101560 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:05:25.101575 systemd[1]: Reloading... Aug 13 07:05:25.256825 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:05:25.257229 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:05:25.258307 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:05:25.259393 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 13 07:05:25.259564 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 13 07:05:25.266368 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
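[Editorial note] The systemd-sysext merge above consumes the extension image and the symlink that the Ignition files stage wrote earlier in this journal. A side-effect-free sketch of that layout (paths taken from the log, recreated under a temporary root purely for illustration):

    import os, tempfile

    # Mirror the layout systemd-sysext scans: /etc/extensions/<name>.raw pointing
    # at the image the Ignition files stage downloaded. Built under a temp root so
    # the sketch changes nothing on a real system.
    root = tempfile.mkdtemp()
    image = os.path.join(root, "opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")
    link = os.path.join(root, "etc/extensions/kubernetes.raw")
    os.makedirs(os.path.dirname(image))
    os.makedirs(os.path.dirname(link))
    open(image, "wb").close()  # stand-in for the real sysext image
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw", link)
    print(os.readlink(link))   # systemd-sysext follows this link and overlays the image onto /usr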
Aug 13 07:05:25.266483 systemd-tmpfiles[1261]: Skipping /boot Aug 13 07:05:25.276580 zram_generator::config[1290]: No configuration found. Aug 13 07:05:25.281194 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:05:25.281323 systemd-tmpfiles[1261]: Skipping /boot Aug 13 07:05:25.553013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:05:25.602781 systemd[1]: Reloading finished in 500 ms. Aug 13 07:05:25.625309 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:05:25.642460 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:05:25.645403 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:05:25.648010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:05:25.652476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:05:25.655698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:05:25.663609 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:05:25.666217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.666407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:05:25.670234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:05:25.676571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:05:25.679131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:05:25.680392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:05:25.680517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.682933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:05:25.683115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:05:25.686866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:05:25.687651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:05:25.690743 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:05:25.690930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:05:25.697206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:05:25.701436 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:05:25.705580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.705771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:05:25.719852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 13 07:05:25.722269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:05:25.749164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:05:25.750455 augenrules[1354]: No rules Aug 13 07:05:25.750628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:05:25.750793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.752219 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:05:25.754189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:05:25.754561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:05:25.756199 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:05:25.758714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:05:25.759037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:05:25.778019 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:05:25.778233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:05:25.787287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.787891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:05:25.793742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:05:25.799126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:05:25.801775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:05:25.804979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:05:25.806198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:05:25.806286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:05:25.807017 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:05:25.808650 systemd[1]: Finished ensure-sysext.service. Aug 13 07:05:25.809929 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:05:25.811680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:05:25.811891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:05:25.813660 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:05:25.813837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:05:25.815409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:05:25.815593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:05:25.817197 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:05:25.817390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 07:05:25.826208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:05:25.826334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:05:25.835665 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:05:25.836130 systemd-resolved[1329]: Positive Trust Anchors: Aug 13 07:05:25.836147 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:05:25.836179 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:05:25.838207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:05:25.840069 systemd-resolved[1329]: Defaulting to hostname 'linux'. Aug 13 07:05:25.840503 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:05:25.841669 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:05:25.841828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:05:25.845569 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:05:25.856061 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:05:25.870404 systemd-udevd[1383]: Using default interface naming scheme 'v255'. Aug 13 07:05:25.887897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:05:25.901672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:05:25.902810 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:05:25.909339 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:05:25.929048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:05:26.119539 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1392) Aug 13 07:05:26.127249 systemd-networkd[1391]: lo: Link UP Aug 13 07:05:26.127259 systemd-networkd[1391]: lo: Gained carrier Aug 13 07:05:26.130194 systemd-networkd[1391]: Enumeration completed Aug 13 07:05:26.130330 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:05:26.131124 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:05:26.131132 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 07:05:26.132963 systemd-networkd[1391]: eth0: Link UP Aug 13 07:05:26.133107 systemd-networkd[1391]: eth0: Gained carrier Aug 13 07:05:26.133186 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:05:26.133623 systemd[1]: Reached target network.target - Network. Aug 13 07:05:26.137998 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:05:26.139669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:05:26.145485 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:05:26.148647 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:05:26.150263 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Aug 13 07:05:26.762541 systemd-resolved[1329]: Clock change detected. Flushing caches. Aug 13 07:05:26.762741 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:05:26.762864 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-08-13 07:05:26.762491 UTC. Aug 13 07:05:26.774923 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:05:26.777965 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:05:26.778175 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:05:26.815732 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:05:26.838082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:05:26.863843 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:05:26.866260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:05:26.869580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:05:26.881907 kernel: kvm_amd: TSC scaling supported Aug 13 07:05:26.881975 kernel: kvm_amd: Nested Virtualization enabled Aug 13 07:05:26.881993 kernel: kvm_amd: Nested Paging enabled Aug 13 07:05:26.882006 kernel: kvm_amd: LBR virtualization supported Aug 13 07:05:26.882709 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 07:05:26.882737 kernel: kvm_amd: Virtual GIF supported Aug 13 07:05:26.901376 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:05:26.907712 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:05:26.941471 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:05:26.949947 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:05:26.959731 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:05:26.993439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:05:26.995029 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:05:26.997225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:05:26.998348 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:05:26.999524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
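[Editorial note] eth0 reacquires the same DHCPv4 lease the initrd saw (10.0.0.45/16 via 10.0.0.1). A small illustration of what that prefix implies, using only the values reported above:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.45/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)                # 10.0.0.0/16
    print(iface.network.num_addresses)  # 65536 addresses in the prefix
    print(gateway in iface.network)     # True: the gateway is on-link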
Aug 13 07:05:27.000759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:05:27.002176 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:05:27.003522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:05:27.004815 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:05:27.006055 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:05:27.006095 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:05:27.007023 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:05:27.008914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:05:27.011920 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:05:27.022698 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:05:27.025202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:05:27.026909 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:05:27.028084 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:05:27.029061 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:05:27.030047 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:05:27.030075 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:05:27.031120 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:05:27.033360 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:05:27.036476 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:05:27.036828 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:05:27.040852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:05:27.042322 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:05:27.043885 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:05:27.049503 jq[1437]: false Aug 13 07:05:27.049956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:05:27.052334 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:05:27.055094 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:05:27.059854 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:05:27.061514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:05:27.061982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:05:27.063853 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:05:27.066827 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Aug 13 07:05:27.068042 dbus-daemon[1436]: [system] SELinux support is enabled Aug 13 07:05:27.068694 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:05:27.072043 extend-filesystems[1438]: Found loop3 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found loop4 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found loop5 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found sr0 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda1 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda2 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda3 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found usr Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda4 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda6 Aug 13 07:05:27.073012 extend-filesystems[1438]: Found vda7 Aug 13 07:05:27.092794 extend-filesystems[1438]: Found vda9 Aug 13 07:05:27.092794 extend-filesystems[1438]: Checking size of /dev/vda9 Aug 13 07:05:27.096598 update_engine[1447]: I20250813 07:05:27.088196 1447 main.cc:92] Flatcar Update Engine starting Aug 13 07:05:27.096598 update_engine[1447]: I20250813 07:05:27.090303 1447 update_check_scheduler.cc:74] Next update check in 3m37s Aug 13 07:05:27.081765 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:05:27.097180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:05:27.097427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:05:27.097916 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:05:27.098128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:05:27.099924 extend-filesystems[1438]: Resized partition /dev/vda9 Aug 13 07:05:27.102853 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:05:27.109697 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:05:27.111942 jq[1449]: true Aug 13 07:05:27.111581 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:05:27.111870 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:05:27.119697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1398) Aug 13 07:05:27.124134 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:05:27.127615 jq[1468]: true Aug 13 07:05:27.127734 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:05:27.267142 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:05:27.331382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:05:27.331473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:05:27.333082 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:05:27.333109 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:05:27.349025 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
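[Editorial note] The online resize kicked off above grows /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size the completion message reports, that is approximately:

    # Arithmetic on the block counts logged by extend-filesystems / resize2fs above.
    old_blocks, new_blocks, block = 553472, 1864699, 4096
    print(f"{old_blocks*block/2**30:.2f} GiB -> {new_blocks*block/2**30:.2f} GiB")  # 2.11 GiB -> 7.11 GiB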
Aug 13 07:05:27.350722 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:05:27.350745 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:05:27.350965 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:05:27.351141 systemd-logind[1445]: New seat seat0. Aug 13 07:05:27.356154 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:05:27.357687 tar[1460]: linux-amd64/LICENSE Aug 13 07:05:27.357966 tar[1460]: linux-amd64/helm Aug 13 07:05:27.391048 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:05:27.398827 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:05:27.399076 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:05:27.407880 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:05:27.506962 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:05:27.516250 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:05:27.518853 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:05:27.520115 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:05:27.564704 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:05:27.577761 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:05:27.585928 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:05:27.585928 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:05:27.585928 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:05:27.593473 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Aug 13 07:05:27.587535 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:05:27.594567 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:05:27.587789 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:05:27.597072 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:05:27.599401 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:05:27.761850 containerd[1462]: time="2025-08-13T07:05:27.761733407Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:05:27.803237 containerd[1462]: time="2025-08-13T07:05:27.803084270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805337 containerd[1462]: time="2025-08-13T07:05:27.805303902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805337 containerd[1462]: time="2025-08-13T07:05:27.805333066Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:05:27.805405 containerd[1462]: time="2025-08-13T07:05:27.805350739Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Aug 13 07:05:27.805584 containerd[1462]: time="2025-08-13T07:05:27.805556646Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:05:27.805584 containerd[1462]: time="2025-08-13T07:05:27.805579138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805670 containerd[1462]: time="2025-08-13T07:05:27.805651243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805724 containerd[1462]: time="2025-08-13T07:05:27.805667193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805924 containerd[1462]: time="2025-08-13T07:05:27.805900881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:05:27.805924 containerd[1462]: time="2025-08-13T07:05:27.805921470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806157 containerd[1462]: time="2025-08-13T07:05:27.805934985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806157 containerd[1462]: time="2025-08-13T07:05:27.805945725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806157 containerd[1462]: time="2025-08-13T07:05:27.806039090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806315 containerd[1462]: time="2025-08-13T07:05:27.806291704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806485 containerd[1462]: time="2025-08-13T07:05:27.806462173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:05:27.806485 containerd[1462]: time="2025-08-13T07:05:27.806482041Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:05:27.806603 containerd[1462]: time="2025-08-13T07:05:27.806584643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:05:27.806663 containerd[1462]: time="2025-08-13T07:05:27.806646529Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:05:27.811766 containerd[1462]: time="2025-08-13T07:05:27.811711576Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:05:27.811766 containerd[1462]: time="2025-08-13T07:05:27.811772400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:05:27.811951 containerd[1462]: time="2025-08-13T07:05:27.811791185Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Aug 13 07:05:27.811951 containerd[1462]: time="2025-08-13T07:05:27.811807937Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:05:27.811951 containerd[1462]: time="2025-08-13T07:05:27.811823656Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:05:27.812039 containerd[1462]: time="2025-08-13T07:05:27.812016718Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:05:27.812245 containerd[1462]: time="2025-08-13T07:05:27.812223586Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:05:27.812383 containerd[1462]: time="2025-08-13T07:05:27.812359571Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:05:27.812383 containerd[1462]: time="2025-08-13T07:05:27.812379879Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:05:27.812435 containerd[1462]: time="2025-08-13T07:05:27.812402502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:05:27.812435 containerd[1462]: time="2025-08-13T07:05:27.812419063Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812435 containerd[1462]: time="2025-08-13T07:05:27.812432728Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812488 containerd[1462]: time="2025-08-13T07:05:27.812445222Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812488 containerd[1462]: time="2025-08-13T07:05:27.812458577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812531 containerd[1462]: time="2025-08-13T07:05:27.812487050Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812531 containerd[1462]: time="2025-08-13T07:05:27.812503471Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812531 containerd[1462]: time="2025-08-13T07:05:27.812517367Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812531 containerd[1462]: time="2025-08-13T07:05:27.812528919Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:05:27.812600 containerd[1462]: time="2025-08-13T07:05:27.812549307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812600 containerd[1462]: time="2025-08-13T07:05:27.812564064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812600 containerd[1462]: time="2025-08-13T07:05:27.812577199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812600 containerd[1462]: time="2025-08-13T07:05:27.812590364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Aug 13 07:05:27.812697 containerd[1462]: time="2025-08-13T07:05:27.812613487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812697 containerd[1462]: time="2025-08-13T07:05:27.812628345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812697 containerd[1462]: time="2025-08-13T07:05:27.812641139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812697 containerd[1462]: time="2025-08-13T07:05:27.812653322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812697 containerd[1462]: time="2025-08-13T07:05:27.812666897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812700791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812714527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812726008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812738421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812753550Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812774289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812787453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812802 containerd[1462]: time="2025-08-13T07:05:27.812798744Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812839311Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812857915Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812868926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812879907Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812889124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812900726Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812910654Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:05:27.812955 containerd[1462]: time="2025-08-13T07:05:27.812920463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:05:27.813247 containerd[1462]: time="2025-08-13T07:05:27.813187964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:05:27.813247 containerd[1462]: time="2025-08-13T07:05:27.813246003Z" level=info msg="Connect containerd service" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.813284074Z" level=info msg="using legacy CRI server" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.813291488Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.813417104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:05:27.820937 
containerd[1462]: time="2025-08-13T07:05:27.814483844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814789647Z" level=info msg="Start subscribing containerd event" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814826747Z" level=info msg="Start recovering state" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814883453Z" level=info msg="Start event monitor" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814893883Z" level=info msg="Start snapshots syncer" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814901878Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.814910975Z" level=info msg="Start streaming server" Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.815166474Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.815216317Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:05:27.820937 containerd[1462]: time="2025-08-13T07:05:27.815261212Z" level=info msg="containerd successfully booted in 0.055511s" Aug 13 07:05:27.820858 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:05:27.833842 tar[1460]: linux-amd64/README.md Aug 13 07:05:27.848741 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:05:28.417927 systemd-networkd[1391]: eth0: Gained IPv6LL Aug 13 07:05:28.421648 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:05:28.423616 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:05:28.433919 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:05:28.436165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:05:28.438289 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:05:28.459697 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:05:28.459960 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:05:28.461815 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:05:28.464237 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:05:29.840252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:29.842112 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:05:29.843797 systemd[1]: Startup finished in 1.051s (kernel) + 6.897s (initrd) + 5.610s (userspace) = 13.560s. Aug 13 07:05:29.860120 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:05:30.405642 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:05:30.406996 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:43422.service - OpenSSH per-connection server daemon (10.0.0.1:43422). 
Aug 13 07:05:30.467083 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:30.469719 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:30.479444 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:05:30.484976 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:05:30.487503 systemd-logind[1445]: New session 1 of user core. Aug 13 07:05:30.526522 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:05:30.531435 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:05:30.543400 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:05:30.681404 systemd[1564]: Queued start job for default target default.target. Aug 13 07:05:30.682885 systemd[1564]: Created slice app.slice - User Application Slice. Aug 13 07:05:30.682912 systemd[1564]: Reached target paths.target - Paths. Aug 13 07:05:30.682926 systemd[1564]: Reached target timers.target - Timers. Aug 13 07:05:30.685424 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:05:30.704740 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:05:30.704872 systemd[1564]: Reached target sockets.target - Sockets. Aug 13 07:05:30.704886 systemd[1564]: Reached target basic.target - Basic System. Aug 13 07:05:30.704928 systemd[1564]: Reached target default.target - Main User Target. Aug 13 07:05:30.704960 systemd[1564]: Startup finished in 150ms. Aug 13 07:05:30.705169 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:05:30.707352 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:05:30.767923 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:43434.service - OpenSSH per-connection server daemon (10.0.0.1:43434). Aug 13 07:05:30.817509 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:30.819525 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:30.824256 systemd-logind[1445]: New session 2 of user core. Aug 13 07:05:30.833926 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:05:30.911353 sshd[1575]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:30.923664 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:43434.service: Deactivated successfully. Aug 13 07:05:30.925571 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:05:30.926266 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:05:30.935921 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:43442.service - OpenSSH per-connection server daemon (10.0.0.1:43442). Aug 13 07:05:30.936472 systemd-logind[1445]: Removed session 2. 
Aug 13 07:05:30.961354 kubelet[1548]: E0813 07:05:30.961246 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:05:30.965521 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43442 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:30.966105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:05:30.966348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:05:30.966746 systemd[1]: kubelet.service: Consumed 2.391s CPU time. Aug 13 07:05:30.967658 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:30.972195 systemd-logind[1445]: New session 3 of user core. Aug 13 07:05:30.992852 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:05:31.044656 sshd[1583]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.052533 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:43442.service: Deactivated successfully. Aug 13 07:05:31.054451 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:05:31.056170 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:05:31.063100 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:43450.service - OpenSSH per-connection server daemon (10.0.0.1:43450). Aug 13 07:05:31.063989 systemd-logind[1445]: Removed session 3. Aug 13 07:05:31.089916 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 43450 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:31.091999 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:31.096436 systemd-logind[1445]: New session 4 of user core. Aug 13 07:05:31.105846 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:05:31.161030 sshd[1591]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.168223 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:43450.service: Deactivated successfully. Aug 13 07:05:31.169904 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:05:31.171532 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:05:31.172820 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:43464.service - OpenSSH per-connection server daemon (10.0.0.1:43464). Aug 13 07:05:31.173617 systemd-logind[1445]: Removed session 4. Aug 13 07:05:31.205548 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 43464 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:31.207614 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:31.211687 systemd-logind[1445]: New session 5 of user core. Aug 13 07:05:31.221837 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:05:31.282846 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:05:31.283280 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:05:31.300730 sudo[1601]: pam_unix(sudo:session): session closed for user root Aug 13 07:05:31.302879 sshd[1598]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.321005 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:43464.service: Deactivated successfully. 
Aug 13 07:05:31.323092 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:05:31.324864 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:05:31.334922 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:43466.service - OpenSSH per-connection server daemon (10.0.0.1:43466). Aug 13 07:05:31.335782 systemd-logind[1445]: Removed session 5. Aug 13 07:05:31.363799 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 43466 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:31.365982 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:31.370119 systemd-logind[1445]: New session 6 of user core. Aug 13 07:05:31.379790 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:05:31.434125 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:05:31.434468 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:05:31.438227 sudo[1610]: pam_unix(sudo:session): session closed for user root Aug 13 07:05:31.445319 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:05:31.445691 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:05:31.464894 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:05:31.466642 auditctl[1613]: No rules Aug 13 07:05:31.468117 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:05:31.468414 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:05:31.470211 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:05:31.502814 augenrules[1631]: No rules Aug 13 07:05:31.504628 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:05:31.506165 sudo[1609]: pam_unix(sudo:session): session closed for user root Aug 13 07:05:31.508265 sshd[1606]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.515464 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:43466.service: Deactivated successfully. Aug 13 07:05:31.517338 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:05:31.519156 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:05:31.531041 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:43474.service - OpenSSH per-connection server daemon (10.0.0.1:43474). Aug 13 07:05:31.531909 systemd-logind[1445]: Removed session 6. Aug 13 07:05:31.556992 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 43474 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:05:31.558553 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:31.562254 systemd-logind[1445]: New session 7 of user core. Aug 13 07:05:31.578792 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:05:31.632156 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:05:31.632505 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:05:32.180911 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 07:05:32.181341 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:05:32.888285 dockerd[1661]: time="2025-08-13T07:05:32.888125982Z" level=info msg="Starting up" Aug 13 07:05:33.837553 dockerd[1661]: time="2025-08-13T07:05:33.837478938Z" level=info msg="Loading containers: start." Aug 13 07:05:33.953709 kernel: Initializing XFRM netlink socket Aug 13 07:05:34.039417 systemd-networkd[1391]: docker0: Link UP Aug 13 07:05:34.062356 dockerd[1661]: time="2025-08-13T07:05:34.062304967Z" level=info msg="Loading containers: done." Aug 13 07:05:34.081309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2941240259-merged.mount: Deactivated successfully. Aug 13 07:05:34.082066 dockerd[1661]: time="2025-08-13T07:05:34.082010595Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:05:34.082175 dockerd[1661]: time="2025-08-13T07:05:34.082153273Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:05:34.082331 dockerd[1661]: time="2025-08-13T07:05:34.082304526Z" level=info msg="Daemon has completed initialization" Aug 13 07:05:34.119967 dockerd[1661]: time="2025-08-13T07:05:34.119806342Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:05:34.120031 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:05:34.828161 containerd[1462]: time="2025-08-13T07:05:34.828108198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 07:05:35.900643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989545776.mount: Deactivated successfully. 
Aug 13 07:05:37.325228 containerd[1462]: time="2025-08-13T07:05:37.325144009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:37.325739 containerd[1462]: time="2025-08-13T07:05:37.325663283Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 07:05:37.327080 containerd[1462]: time="2025-08-13T07:05:37.327047649Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:37.329835 containerd[1462]: time="2025-08-13T07:05:37.329792505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:37.331339 containerd[1462]: time="2025-08-13T07:05:37.331295303Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.503138374s" Aug 13 07:05:37.331401 containerd[1462]: time="2025-08-13T07:05:37.331344856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 07:05:37.332333 containerd[1462]: time="2025-08-13T07:05:37.332306980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 07:05:38.733817 containerd[1462]: time="2025-08-13T07:05:38.733746809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:38.734498 containerd[1462]: time="2025-08-13T07:05:38.734413449Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 07:05:38.735703 containerd[1462]: time="2025-08-13T07:05:38.735637574Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:38.738509 containerd[1462]: time="2025-08-13T07:05:38.738472269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:38.739414 containerd[1462]: time="2025-08-13T07:05:38.739370353Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.407032906s" Aug 13 07:05:38.739456 containerd[1462]: time="2025-08-13T07:05:38.739412062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 07:05:38.740210 
containerd[1462]: time="2025-08-13T07:05:38.740172558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 07:05:40.443921 containerd[1462]: time="2025-08-13T07:05:40.443852798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:40.444730 containerd[1462]: time="2025-08-13T07:05:40.444694305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 07:05:40.446103 containerd[1462]: time="2025-08-13T07:05:40.446048484Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:40.449244 containerd[1462]: time="2025-08-13T07:05:40.449178333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:40.450505 containerd[1462]: time="2025-08-13T07:05:40.450458854Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.71024055s" Aug 13 07:05:40.450552 containerd[1462]: time="2025-08-13T07:05:40.450509308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 07:05:40.451175 containerd[1462]: time="2025-08-13T07:05:40.451135643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:05:41.216752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:05:41.288006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:05:41.507623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:41.512582 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:05:41.711028 kubelet[1880]: E0813 07:05:41.710945 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:05:41.718416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:05:41.718646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:05:42.531548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050197326.mount: Deactivated successfully. 
Aug 13 07:05:43.637259 containerd[1462]: time="2025-08-13T07:05:43.637176959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:43.638190 containerd[1462]: time="2025-08-13T07:05:43.638114918Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 07:05:43.639175 containerd[1462]: time="2025-08-13T07:05:43.639136303Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:43.641156 containerd[1462]: time="2025-08-13T07:05:43.641112348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:43.641810 containerd[1462]: time="2025-08-13T07:05:43.641766405Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 3.190596247s" Aug 13 07:05:43.641810 containerd[1462]: time="2025-08-13T07:05:43.641798214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 07:05:43.642344 containerd[1462]: time="2025-08-13T07:05:43.642318660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 07:05:44.126170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612404937.mount: Deactivated successfully. 
Aug 13 07:05:45.888023 containerd[1462]: time="2025-08-13T07:05:45.887948475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:45.889218 containerd[1462]: time="2025-08-13T07:05:45.889168493Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 07:05:45.890619 containerd[1462]: time="2025-08-13T07:05:45.890580440Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:45.893899 containerd[1462]: time="2025-08-13T07:05:45.893854840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:45.895020 containerd[1462]: time="2025-08-13T07:05:45.894977315Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.2526298s" Aug 13 07:05:45.895062 containerd[1462]: time="2025-08-13T07:05:45.895016839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 07:05:45.895643 containerd[1462]: time="2025-08-13T07:05:45.895611854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:05:46.476767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980903256.mount: Deactivated successfully. 
Aug 13 07:05:46.483117 containerd[1462]: time="2025-08-13T07:05:46.483075308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:46.483827 containerd[1462]: time="2025-08-13T07:05:46.483774219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:05:46.484877 containerd[1462]: time="2025-08-13T07:05:46.484835348Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:46.487487 containerd[1462]: time="2025-08-13T07:05:46.487445993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:46.488417 containerd[1462]: time="2025-08-13T07:05:46.488357061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.53314ms" Aug 13 07:05:46.488417 containerd[1462]: time="2025-08-13T07:05:46.488410772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:05:46.489106 containerd[1462]: time="2025-08-13T07:05:46.489063556Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 07:05:47.808478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309660074.mount: Deactivated successfully. Aug 13 07:05:50.952170 containerd[1462]: time="2025-08-13T07:05:50.952100720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:50.953090 containerd[1462]: time="2025-08-13T07:05:50.953025564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 07:05:50.954413 containerd[1462]: time="2025-08-13T07:05:50.954371117Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:50.958821 containerd[1462]: time="2025-08-13T07:05:50.958749797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:05:50.960122 containerd[1462]: time="2025-08-13T07:05:50.960083016Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.470986528s" Aug 13 07:05:50.960184 containerd[1462]: time="2025-08-13T07:05:50.960126087Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 07:05:51.849015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 07:05:51.859900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:05:52.026376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:52.030964 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:05:52.074531 kubelet[2040]: E0813 07:05:52.074471 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:05:52.079302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:05:52.079559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:05:54.752691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:54.762891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:05:54.787950 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-7.scope)... Aug 13 07:05:54.787975 systemd[1]: Reloading... Aug 13 07:05:54.864708 zram_generator::config[2095]: No configuration found. Aug 13 07:05:55.481658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:05:55.560476 systemd[1]: Reloading finished in 771 ms. Aug 13 07:05:55.610409 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:05:55.610541 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:05:55.610885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:55.612720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:05:55.791256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:05:55.797151 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:05:55.844382 kubelet[2144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:05:55.844382 kubelet[2144]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:05:55.844382 kubelet[2144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:05:55.844879 kubelet[2144]: I0813 07:05:55.844423 2144 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:05:57.116743 kubelet[2144]: I0813 07:05:57.116657 2144 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:05:57.116743 kubelet[2144]: I0813 07:05:57.116733 2144 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:05:57.117302 kubelet[2144]: I0813 07:05:57.116963 2144 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:05:57.142649 kubelet[2144]: E0813 07:05:57.142590 2144 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:05:57.143255 kubelet[2144]: I0813 07:05:57.143215 2144 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:05:57.151406 kubelet[2144]: E0813 07:05:57.151327 2144 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:05:57.151406 kubelet[2144]: I0813 07:05:57.151388 2144 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:05:57.158376 kubelet[2144]: I0813 07:05:57.157889 2144 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:05:57.158376 kubelet[2144]: I0813 07:05:57.158235 2144 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:05:57.158883 kubelet[2144]: I0813 07:05:57.158258 2144 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:05:57.158994 kubelet[2144]: I0813 07:05:57.158898 2144 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:05:57.158994 kubelet[2144]: I0813 07:05:57.158917 2144 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:05:57.159145 kubelet[2144]: I0813 07:05:57.159124 2144 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:05:57.161791 kubelet[2144]: I0813 07:05:57.161759 2144 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:05:57.161791 kubelet[2144]: I0813 07:05:57.161785 2144 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:05:57.161856 kubelet[2144]: I0813 07:05:57.161823 2144 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:05:57.161856 kubelet[2144]: I0813 07:05:57.161849 2144 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:05:57.167990 kubelet[2144]: I0813 07:05:57.167914 2144 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:05:57.169252 kubelet[2144]: I0813 07:05:57.168459 2144 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:05:57.170921 kubelet[2144]: W0813 07:05:57.170520 2144 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 07:05:57.171648 kubelet[2144]: E0813 07:05:57.171618 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:05:57.171881 kubelet[2144]: E0813 07:05:57.171825 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:05:57.173817 kubelet[2144]: I0813 07:05:57.173792 2144 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:05:57.173866 kubelet[2144]: I0813 07:05:57.173842 2144 server.go:1289] "Started kubelet" Aug 13 07:05:57.174717 kubelet[2144]: I0813 07:05:57.174449 2144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:05:57.177411 kubelet[2144]: I0813 07:05:57.176557 2144 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:05:57.177411 kubelet[2144]: I0813 07:05:57.176582 2144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:05:57.177411 kubelet[2144]: I0813 07:05:57.176634 2144 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:05:57.177806 kubelet[2144]: I0813 07:05:57.177789 2144 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:05:57.178352 kubelet[2144]: E0813 07:05:57.177378 2144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b41bc63bb3059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:05:57.173809241 +0000 UTC m=+1.372040526,LastTimestamp:2025-08-13 07:05:57.173809241 +0000 UTC m=+1.372040526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:05:57.179126 kubelet[2144]: I0813 07:05:57.178666 2144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:05:57.179475 kubelet[2144]: E0813 07:05:57.179438 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:05:57.179569 kubelet[2144]: I0813 07:05:57.179549 2144 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:05:57.179884 kubelet[2144]: I0813 07:05:57.179867 2144 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:05:57.180012 kubelet[2144]: I0813 07:05:57.180001 2144 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:05:57.180172 kubelet[2144]: E0813 07:05:57.180139 2144 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:05:57.180446 kubelet[2144]: I0813 07:05:57.180428 2144 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:05:57.180519 kubelet[2144]: E0813 07:05:57.180496 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:05:57.180626 kubelet[2144]: I0813 07:05:57.180609 2144 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:05:57.180970 kubelet[2144]: E0813 07:05:57.180942 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Aug 13 07:05:57.181804 kubelet[2144]: I0813 07:05:57.181760 2144 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:05:57.184599 kubelet[2144]: I0813 07:05:57.184549 2144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:05:57.202089 kubelet[2144]: I0813 07:05:57.202046 2144 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:05:57.202319 kubelet[2144]: I0813 07:05:57.202283 2144 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:05:57.202319 kubelet[2144]: I0813 07:05:57.202308 2144 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:05:57.205114 kubelet[2144]: I0813 07:05:57.205071 2144 policy_none.go:49] "None policy: Start" Aug 13 07:05:57.205114 kubelet[2144]: I0813 07:05:57.205107 2144 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:05:57.205114 kubelet[2144]: I0813 07:05:57.205126 2144 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:05:57.211152 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:05:57.280730 kubelet[2144]: E0813 07:05:57.280636 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:05:57.333892 kubelet[2144]: I0813 07:05:57.333821 2144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:05:57.333892 kubelet[2144]: I0813 07:05:57.333889 2144 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:05:57.334054 kubelet[2144]: I0813 07:05:57.333937 2144 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
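Editor's note: every "Failed to watch" reflector error and "Failed to ensure lease exists, will retry" message above is the same underlying symptom: the kubelet cannot reach the API server at 10.0.0.45:6443 because the kube-apiserver static pod it is about to launch is not serving yet. A minimal reachability probe under that assumption (address taken from the log lines; no authentication or certificate validation) is sketched here.

```go
// Sketch: probe the API server endpoint the kubelet keeps retrying
// (10.0.0.45:6443, taken from the errors above). TCP reachability only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.0.0.45:6443" // from the "connection refused" errors above

	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable yet: %v\n", err) // matches "connect: connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```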
Aug 13 07:05:57.334054 kubelet[2144]: I0813 07:05:57.333951 2144 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:05:57.334128 kubelet[2144]: E0813 07:05:57.334072 2144 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:05:57.335379 kubelet[2144]: E0813 07:05:57.335338 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:05:57.343415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:05:57.346668 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:05:57.357579 kubelet[2144]: E0813 07:05:57.357531 2144 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:05:57.357888 kubelet[2144]: I0813 07:05:57.357806 2144 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:05:57.357888 kubelet[2144]: I0813 07:05:57.357831 2144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:05:57.358489 kubelet[2144]: I0813 07:05:57.358027 2144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:05:57.359063 kubelet[2144]: E0813 07:05:57.359010 2144 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:05:57.359150 kubelet[2144]: E0813 07:05:57.359084 2144 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:05:57.382286 kubelet[2144]: E0813 07:05:57.382174 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Aug 13 07:05:57.445876 systemd[1]: Created slice kubepods-burstable-pod5993264d76144ec1969829c940122ab0.slice - libcontainer container kubepods-burstable-pod5993264d76144ec1969829c940122ab0.slice. Aug 13 07:05:57.459295 kubelet[2144]: I0813 07:05:57.459237 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:05:57.459664 kubelet[2144]: E0813 07:05:57.459625 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:57.459785 kubelet[2144]: E0813 07:05:57.459727 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Aug 13 07:05:57.462921 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. 
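Editor's note: the node-lease controller's retry interval doubles while the API server stays unreachable; the log shows interval="200ms", then "400ms" above, and "800ms" and "1.6s" further down. The tiny sketch below only reproduces the doubling sequence observed in this journal; it is not the kubelet's own backoff code, and any cap on the interval is not visible here.

```go
// Sketch: the node-lease retry intervals observed in this log
// (200ms -> 400ms -> 800ms -> 1.6s) double on each failed attempt.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first "will retry" interval in the log
	for i := 0; i < 4; i++ {
		fmt.Printf("attempt %d: retry in %v\n", i+1, interval)
		interval *= 2 // matches the 400ms, 800ms and 1.6s entries
	}
}
```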
Aug 13 07:05:57.475167 kubelet[2144]: E0813 07:05:57.475124 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:57.478485 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. Aug 13 07:05:57.480260 kubelet[2144]: E0813 07:05:57.480239 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:57.481317 kubelet[2144]: I0813 07:05:57.481275 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:05:57.481317 kubelet[2144]: I0813 07:05:57.481310 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:05:57.481427 kubelet[2144]: I0813 07:05:57.481330 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:05:57.481427 kubelet[2144]: I0813 07:05:57.481371 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:05:57.481427 kubelet[2144]: I0813 07:05:57.481391 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:05:57.481427 kubelet[2144]: I0813 07:05:57.481411 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:05:57.481427 kubelet[2144]: I0813 07:05:57.481428 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:05:57.481561 kubelet[2144]: I0813 07:05:57.481442 2144 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:05:57.481561 kubelet[2144]: I0813 07:05:57.481456 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:05:57.661437 kubelet[2144]: I0813 07:05:57.661306 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:05:57.661821 kubelet[2144]: E0813 07:05:57.661782 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Aug 13 07:05:57.760930 kubelet[2144]: E0813 07:05:57.760872 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:57.761701 containerd[1462]: time="2025-08-13T07:05:57.761624475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5993264d76144ec1969829c940122ab0,Namespace:kube-system,Attempt:0,}" Aug 13 07:05:57.775841 kubelet[2144]: E0813 07:05:57.775811 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:57.776277 containerd[1462]: time="2025-08-13T07:05:57.776238566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 07:05:57.781694 kubelet[2144]: E0813 07:05:57.781647 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:57.782186 containerd[1462]: time="2025-08-13T07:05:57.782155511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 07:05:57.782657 kubelet[2144]: E0813 07:05:57.782611 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Aug 13 07:05:58.064193 kubelet[2144]: I0813 07:05:58.064043 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:05:58.064437 kubelet[2144]: E0813 07:05:58.064407 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Aug 13 07:05:58.084204 kubelet[2144]: E0813 07:05:58.084170 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:05:58.262799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168355133.mount: Deactivated successfully. Aug 13 07:05:58.270162 containerd[1462]: time="2025-08-13T07:05:58.270106989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:05:58.271104 containerd[1462]: time="2025-08-13T07:05:58.271067390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:05:58.271963 containerd[1462]: time="2025-08-13T07:05:58.271933023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:05:58.272888 containerd[1462]: time="2025-08-13T07:05:58.272845314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:05:58.273615 containerd[1462]: time="2025-08-13T07:05:58.273580162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:05:58.274512 containerd[1462]: time="2025-08-13T07:05:58.274479498Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:05:58.275333 containerd[1462]: time="2025-08-13T07:05:58.275298253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:05:58.277762 containerd[1462]: time="2025-08-13T07:05:58.277729932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:05:58.279625 containerd[1462]: time="2025-08-13T07:05:58.279593667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.273969ms" Aug 13 07:05:58.280272 containerd[1462]: time="2025-08-13T07:05:58.280229189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.495589ms" Aug 13 07:05:58.280856 containerd[1462]: time="2025-08-13T07:05:58.280818584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.612569ms" Aug 13 07:05:58.364711 kubelet[2144]: E0813 07:05:58.364072 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:05:58.382473 kubelet[2144]: E0813 07:05:58.382389 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:05:58.480206 containerd[1462]: time="2025-08-13T07:05:58.480059110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:05:58.480206 containerd[1462]: time="2025-08-13T07:05:58.480130614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:05:58.480206 containerd[1462]: time="2025-08-13T07:05:58.480168384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.480600 containerd[1462]: time="2025-08-13T07:05:58.480282158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.481831 containerd[1462]: time="2025-08-13T07:05:58.481686771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:05:58.481831 containerd[1462]: time="2025-08-13T07:05:58.481749860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:05:58.481831 containerd[1462]: time="2025-08-13T07:05:58.481765289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.482467 containerd[1462]: time="2025-08-13T07:05:58.482389209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:05:58.482535 containerd[1462]: time="2025-08-13T07:05:58.482484888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:05:58.482595 containerd[1462]: time="2025-08-13T07:05:58.482547626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.482756 containerd[1462]: time="2025-08-13T07:05:58.482711994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.483482 containerd[1462]: time="2025-08-13T07:05:58.483416294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:58.512836 systemd[1]: Started cri-containerd-59ccabe2a0eeca0d012d5da1edef957f5e8d7fa69c4dcbec02a526a93313a013.scope - libcontainer container 59ccabe2a0eeca0d012d5da1edef957f5e8d7fa69c4dcbec02a526a93313a013. 
Aug 13 07:05:58.517718 systemd[1]: Started cri-containerd-1b732f859d24f802aee5effd78b1f6bd843606d740cff60a77f6caa442803f28.scope - libcontainer container 1b732f859d24f802aee5effd78b1f6bd843606d740cff60a77f6caa442803f28. Aug 13 07:05:58.519886 systemd[1]: Started cri-containerd-9d1bcbf6ab17f630d12889eaddd36ba0db6f51df015cac6a51db0e461074939d.scope - libcontainer container 9d1bcbf6ab17f630d12889eaddd36ba0db6f51df015cac6a51db0e461074939d. Aug 13 07:05:58.583731 kubelet[2144]: E0813 07:05:58.583631 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" Aug 13 07:05:58.606243 containerd[1462]: time="2025-08-13T07:05:58.606034183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b732f859d24f802aee5effd78b1f6bd843606d740cff60a77f6caa442803f28\"" Aug 13 07:05:58.609292 kubelet[2144]: E0813 07:05:58.609040 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:58.616106 containerd[1462]: time="2025-08-13T07:05:58.616019836Z" level=info msg="CreateContainer within sandbox \"1b732f859d24f802aee5effd78b1f6bd843606d740cff60a77f6caa442803f28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:05:58.616330 containerd[1462]: time="2025-08-13T07:05:58.616285925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5993264d76144ec1969829c940122ab0,Namespace:kube-system,Attempt:0,} returns sandbox id \"59ccabe2a0eeca0d012d5da1edef957f5e8d7fa69c4dcbec02a526a93313a013\"" Aug 13 07:05:58.619172 kubelet[2144]: E0813 07:05:58.619104 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:58.620927 containerd[1462]: time="2025-08-13T07:05:58.620880900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d1bcbf6ab17f630d12889eaddd36ba0db6f51df015cac6a51db0e461074939d\"" Aug 13 07:05:58.621702 kubelet[2144]: E0813 07:05:58.621549 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:58.623895 containerd[1462]: time="2025-08-13T07:05:58.623817386Z" level=info msg="CreateContainer within sandbox \"59ccabe2a0eeca0d012d5da1edef957f5e8d7fa69c4dcbec02a526a93313a013\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:05:58.626460 containerd[1462]: time="2025-08-13T07:05:58.626430446Z" level=info msg="CreateContainer within sandbox \"9d1bcbf6ab17f630d12889eaddd36ba0db6f51df015cac6a51db0e461074939d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:05:58.644346 containerd[1462]: time="2025-08-13T07:05:58.644294531Z" level=info msg="CreateContainer within sandbox \"1b732f859d24f802aee5effd78b1f6bd843606d740cff60a77f6caa442803f28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"8e565b9c6a27952e8f6b0b6fdd368cf1a3071c0f12d034e19ecc45c908481e70\"" Aug 13 07:05:58.644940 containerd[1462]: time="2025-08-13T07:05:58.644900046Z" level=info msg="StartContainer for \"8e565b9c6a27952e8f6b0b6fdd368cf1a3071c0f12d034e19ecc45c908481e70\"" Aug 13 07:05:58.649377 containerd[1462]: time="2025-08-13T07:05:58.649336555Z" level=info msg="CreateContainer within sandbox \"59ccabe2a0eeca0d012d5da1edef957f5e8d7fa69c4dcbec02a526a93313a013\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b02e0a0b3f4317283b4fd9fab5521794994654aaf751992ff80d475e7291bf8d\"" Aug 13 07:05:58.649764 containerd[1462]: time="2025-08-13T07:05:58.649719112Z" level=info msg="StartContainer for \"b02e0a0b3f4317283b4fd9fab5521794994654aaf751992ff80d475e7291bf8d\"" Aug 13 07:05:58.652118 containerd[1462]: time="2025-08-13T07:05:58.652070972Z" level=info msg="CreateContainer within sandbox \"9d1bcbf6ab17f630d12889eaddd36ba0db6f51df015cac6a51db0e461074939d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6877e9dd81787c2740d71d577813f021db5ff7f74a6c6d9f0a0f7a818b7ed455\"" Aug 13 07:05:58.652702 containerd[1462]: time="2025-08-13T07:05:58.652444312Z" level=info msg="StartContainer for \"6877e9dd81787c2740d71d577813f021db5ff7f74a6c6d9f0a0f7a818b7ed455\"" Aug 13 07:05:58.677871 systemd[1]: Started cri-containerd-8e565b9c6a27952e8f6b0b6fdd368cf1a3071c0f12d034e19ecc45c908481e70.scope - libcontainer container 8e565b9c6a27952e8f6b0b6fdd368cf1a3071c0f12d034e19ecc45c908481e70. Aug 13 07:05:58.682395 systemd[1]: Started cri-containerd-6877e9dd81787c2740d71d577813f021db5ff7f74a6c6d9f0a0f7a818b7ed455.scope - libcontainer container 6877e9dd81787c2740d71d577813f021db5ff7f74a6c6d9f0a0f7a818b7ed455. Aug 13 07:05:58.683909 systemd[1]: Started cri-containerd-b02e0a0b3f4317283b4fd9fab5521794994654aaf751992ff80d475e7291bf8d.scope - libcontainer container b02e0a0b3f4317283b4fd9fab5521794994654aaf751992ff80d475e7291bf8d. 
Aug 13 07:05:58.729092 containerd[1462]: time="2025-08-13T07:05:58.725873501Z" level=info msg="StartContainer for \"8e565b9c6a27952e8f6b0b6fdd368cf1a3071c0f12d034e19ecc45c908481e70\" returns successfully" Aug 13 07:05:58.737149 containerd[1462]: time="2025-08-13T07:05:58.737105501Z" level=info msg="StartContainer for \"b02e0a0b3f4317283b4fd9fab5521794994654aaf751992ff80d475e7291bf8d\" returns successfully" Aug 13 07:05:58.737236 containerd[1462]: time="2025-08-13T07:05:58.737184089Z" level=info msg="StartContainer for \"6877e9dd81787c2740d71d577813f021db5ff7f74a6c6d9f0a0f7a818b7ed455\" returns successfully" Aug 13 07:05:58.866574 kubelet[2144]: I0813 07:05:58.866433 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:05:59.342613 kubelet[2144]: E0813 07:05:59.342376 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:59.342613 kubelet[2144]: E0813 07:05:59.342542 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:59.345637 kubelet[2144]: E0813 07:05:59.345603 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:59.345726 kubelet[2144]: E0813 07:05:59.345717 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:59.346250 kubelet[2144]: E0813 07:05:59.346216 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:05:59.346333 kubelet[2144]: E0813 07:05:59.346309 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:00.264968 kubelet[2144]: E0813 07:06:00.264906 2144 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:06:00.349699 kubelet[2144]: E0813 07:06:00.349630 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:06:00.350550 kubelet[2144]: E0813 07:06:00.350351 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:06:00.350550 kubelet[2144]: E0813 07:06:00.350475 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:00.350730 kubelet[2144]: E0813 07:06:00.350716 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:00.351694 kubelet[2144]: I0813 07:06:00.351639 2144 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:06:00.351731 kubelet[2144]: E0813 07:06:00.351699 2144 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Aug 13 07:06:00.364517 kubelet[2144]: E0813 07:06:00.364481 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:06:00.465292 kubelet[2144]: E0813 07:06:00.465148 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:06:00.581526 kubelet[2144]: I0813 07:06:00.581388 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:00.586697 kubelet[2144]: E0813 07:06:00.586640 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:00.586697 kubelet[2144]: I0813 07:06:00.586670 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:00.588935 kubelet[2144]: E0813 07:06:00.588888 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:00.588935 kubelet[2144]: I0813 07:06:00.588929 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:00.590330 kubelet[2144]: E0813 07:06:00.590301 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:01.169881 kubelet[2144]: I0813 07:06:01.169837 2144 apiserver.go:52] "Watching apiserver" Aug 13 07:06:01.180254 kubelet[2144]: I0813 07:06:01.180179 2144 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:06:01.349474 kubelet[2144]: I0813 07:06:01.349391 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:01.354748 kubelet[2144]: E0813 07:06:01.354718 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:01.683387 kubelet[2144]: I0813 07:06:01.683328 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:01.688350 kubelet[2144]: E0813 07:06:01.688313 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:02.247577 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-7.scope)... Aug 13 07:06:02.247596 systemd[1]: Reloading... Aug 13 07:06:02.328711 zram_generator::config[2471]: No configuration found. 
Aug 13 07:06:02.351862 kubelet[2144]: E0813 07:06:02.351816 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:02.352318 kubelet[2144]: E0813 07:06:02.352278 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:02.439940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:02.546511 systemd[1]: Reloading finished in 298 ms. Aug 13 07:06:02.601446 kubelet[2144]: I0813 07:06:02.601407 2144 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:02.601691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:02.618143 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:06:02.618533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:02.618604 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 132.6M memory peak, 0B memory swap peak. Aug 13 07:06:02.628031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:02.820517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:02.832168 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:06:02.878322 kubelet[2513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:02.878322 kubelet[2513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:06:02.878322 kubelet[2513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
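Editor's note: the restarted kubelet (PID 2513) repeats the deprecation warnings for --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir, which are still being passed on the command line rather than through the kubelet config file. One quick way to confirm which flags a running kubelet actually received is to read its /proc cmdline, as sketched below; the PID comes from this log and is only valid for this boot.

```go
// Sketch: print the command-line flags of a running kubelet by reading
// /proc/<pid>/cmdline (arguments are NUL-separated). PID 2513 is taken from
// the log above and will differ on another boot.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/proc/2513/cmdline")
	if err != nil {
		log.Fatal(err)
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	for _, a := range args {
		if strings.HasPrefix(a, "--") {
			fmt.Println(a) // deprecated flags like --volume-plugin-dir show up here
		}
	}
}
```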
Aug 13 07:06:02.878829 kubelet[2513]: I0813 07:06:02.878374 2513 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:06:02.885753 kubelet[2513]: I0813 07:06:02.885719 2513 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:06:02.885753 kubelet[2513]: I0813 07:06:02.885741 2513 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:06:02.885944 kubelet[2513]: I0813 07:06:02.885929 2513 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:06:02.887120 kubelet[2513]: I0813 07:06:02.887092 2513 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 07:06:02.889966 kubelet[2513]: I0813 07:06:02.889918 2513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:02.893308 kubelet[2513]: E0813 07:06:02.893258 2513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:06:02.893362 kubelet[2513]: I0813 07:06:02.893309 2513 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:06:02.898607 kubelet[2513]: I0813 07:06:02.898548 2513 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:06:02.898873 kubelet[2513]: I0813 07:06:02.898828 2513 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:06:02.899010 kubelet[2513]: I0813 07:06:02.898865 2513 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:06:02.899167 kubelet[2513]: I0813 07:06:02.899015 2513 topology_manager.go:138] "Creating topology 
manager with none policy" Aug 13 07:06:02.899167 kubelet[2513]: I0813 07:06:02.899025 2513 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:06:02.899167 kubelet[2513]: I0813 07:06:02.899076 2513 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:02.899263 kubelet[2513]: I0813 07:06:02.899247 2513 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:06:02.899290 kubelet[2513]: I0813 07:06:02.899263 2513 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:06:02.899290 kubelet[2513]: I0813 07:06:02.899288 2513 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:06:02.900186 kubelet[2513]: I0813 07:06:02.899304 2513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:06:02.900336 kubelet[2513]: I0813 07:06:02.900297 2513 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:06:02.901090 kubelet[2513]: I0813 07:06:02.901050 2513 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:06:02.907203 kubelet[2513]: I0813 07:06:02.907168 2513 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:06:02.907279 kubelet[2513]: I0813 07:06:02.907243 2513 server.go:1289] "Started kubelet" Aug 13 07:06:02.909223 kubelet[2513]: I0813 07:06:02.909021 2513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:06:02.909223 kubelet[2513]: I0813 07:06:02.909082 2513 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:06:02.910961 kubelet[2513]: I0813 07:06:02.910895 2513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:06:02.911626 kubelet[2513]: I0813 07:06:02.911589 2513 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:06:02.912040 kubelet[2513]: I0813 07:06:02.912013 2513 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:06:02.912193 kubelet[2513]: E0813 07:06:02.912152 2513 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:06:02.912723 kubelet[2513]: I0813 07:06:02.912698 2513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:06:02.916181 kubelet[2513]: I0813 07:06:02.916144 2513 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:06:02.916249 kubelet[2513]: I0813 07:06:02.916225 2513 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:06:02.916499 kubelet[2513]: I0813 07:06:02.916474 2513 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:06:02.916988 kubelet[2513]: I0813 07:06:02.916960 2513 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:06:02.917089 kubelet[2513]: I0813 07:06:02.917068 2513 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:06:02.918656 kubelet[2513]: I0813 07:06:02.918591 2513 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:06:02.930481 kubelet[2513]: I0813 07:06:02.930435 2513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:06:02.933101 kubelet[2513]: I0813 07:06:02.933077 2513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:06:02.933155 kubelet[2513]: I0813 07:06:02.933107 2513 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:06:02.933155 kubelet[2513]: I0813 07:06:02.933130 2513 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
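Editor's note: the second kubelet instance again listens on 0.0.0.0:10250 with its serving certificate pair under /var/lib/kubelet/pki and serves the podresources API on the unix socket /var/lib/kubelet/pod-resources/kubelet.sock. A minimal check that the socket is present and accepting connections is sketched below (paths taken from the log; this is a plain dial, not a gRPC podresources call).

```go
// Sketch: verify the kubelet's podresources unix socket (path from the log)
// is present and accepting connections. Plain dial only, no gRPC.
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/pod-resources/kubelet.sock"

	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		log.Fatalf("podresources socket not ready: %v", err)
	}
	defer conn.Close()
	fmt.Println("podresources socket is accepting connections")
}
```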
Aug 13 07:06:02.933155 kubelet[2513]: I0813 07:06:02.933137 2513 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:06:02.933231 kubelet[2513]: E0813 07:06:02.933182 2513 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955767 2513 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955785 2513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955809 2513 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955933 2513 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955946 2513 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955963 2513 policy_none.go:49] "None policy: Start" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955974 2513 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.955986 2513 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:06:02.956316 kubelet[2513]: I0813 07:06:02.956065 2513 state_mem.go:75] "Updated machine memory state" Aug 13 07:06:02.960220 kubelet[2513]: E0813 07:06:02.960180 2513 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:06:02.960401 kubelet[2513]: I0813 07:06:02.960385 2513 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:06:02.960703 kubelet[2513]: I0813 07:06:02.960402 2513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:06:02.961055 kubelet[2513]: I0813 07:06:02.961025 2513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:06:02.965692 kubelet[2513]: E0813 07:06:02.963418 2513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:06:03.034075 kubelet[2513]: I0813 07:06:03.034003 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.034281 kubelet[2513]: I0813 07:06:03.034240 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:03.034454 kubelet[2513]: I0813 07:06:03.034292 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.039382 kubelet[2513]: E0813 07:06:03.039325 2513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.039382 kubelet[2513]: E0813 07:06:03.039360 2513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:03.070829 kubelet[2513]: I0813 07:06:03.070708 2513 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:06:03.076897 kubelet[2513]: I0813 07:06:03.076871 2513 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 07:06:03.076984 kubelet[2513]: I0813 07:06:03.076961 2513 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:06:03.117936 kubelet[2513]: I0813 07:06:03.117888 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.117936 kubelet[2513]: I0813 07:06:03.117922 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.117936 kubelet[2513]: I0813 07:06:03.117943 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5993264d76144ec1969829c940122ab0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5993264d76144ec1969829c940122ab0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.118132 kubelet[2513]: I0813 07:06:03.117961 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.118132 kubelet[2513]: I0813 07:06:03.118011 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.118132 kubelet[2513]: I0813 07:06:03.118046 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.118132 kubelet[2513]: I0813 07:06:03.118076 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.118132 kubelet[2513]: I0813 07:06:03.118090 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:03.118250 kubelet[2513]: I0813 07:06:03.118106 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:03.249960 sudo[2556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:06:03.250352 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:06:03.340192 kubelet[2513]: E0813 07:06:03.340064 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:03.340192 kubelet[2513]: E0813 07:06:03.340093 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:03.340349 kubelet[2513]: E0813 07:06:03.340272 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:03.735421 sudo[2556]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:03.900494 kubelet[2513]: I0813 07:06:03.900434 2513 apiserver.go:52] "Watching apiserver" Aug 13 07:06:03.917027 kubelet[2513]: I0813 07:06:03.916990 2513 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:06:03.944510 kubelet[2513]: I0813 07:06:03.944476 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:03.945487 kubelet[2513]: I0813 07:06:03.944945 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:03.945487 kubelet[2513]: I0813 07:06:03.945294 2513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:04.197044 kubelet[2513]: E0813 07:06:04.196894 2513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:06:04.197171 kubelet[2513]: E0813 07:06:04.197049 2513 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:04.232840 kubelet[2513]: E0813 07:06:04.232798 2513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:06:04.232981 kubelet[2513]: E0813 07:06:04.232929 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:04.233112 kubelet[2513]: E0813 07:06:04.233059 2513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:06:04.233284 kubelet[2513]: E0813 07:06:04.233261 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:04.234123 kubelet[2513]: I0813 07:06:04.234071 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.234032955 podStartE2EDuration="1.234032955s" podCreationTimestamp="2025-08-13 07:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:04.233527605 +0000 UTC m=+1.395389084" watchObservedRunningTime="2025-08-13 07:06:04.234032955 +0000 UTC m=+1.395894434" Aug 13 07:06:04.247561 kubelet[2513]: I0813 07:06:04.247492 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.247473741 podStartE2EDuration="3.247473741s" podCreationTimestamp="2025-08-13 07:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:04.247345675 +0000 UTC m=+1.409207154" watchObservedRunningTime="2025-08-13 07:06:04.247473741 +0000 UTC m=+1.409335220" Aug 13 07:06:04.247796 kubelet[2513]: I0813 07:06:04.247633 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.247628798 podStartE2EDuration="3.247628798s" podCreationTimestamp="2025-08-13 07:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:04.241199919 +0000 UTC m=+1.403061388" watchObservedRunningTime="2025-08-13 07:06:04.247628798 +0000 UTC m=+1.409490277" Aug 13 07:06:04.947164 kubelet[2513]: E0813 07:06:04.947075 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:04.947164 kubelet[2513]: E0813 07:06:04.947115 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:04.947749 kubelet[2513]: E0813 07:06:04.947380 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:05.367037 sudo[1643]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:05.369246 
sshd[1639]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:05.374036 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:43474.service: Deactivated successfully. Aug 13 07:06:05.376225 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:06:05.376471 systemd[1]: session-7.scope: Consumed 6.750s CPU time, 161.8M memory peak, 0B memory swap peak. Aug 13 07:06:05.377093 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:06:05.378106 systemd-logind[1445]: Removed session 7. Aug 13 07:06:06.348929 kubelet[2513]: E0813 07:06:06.348881 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:07.569266 kubelet[2513]: E0813 07:06:07.569206 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:08.009347 kubelet[2513]: E0813 07:06:08.009270 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:08.953221 kubelet[2513]: E0813 07:06:08.953185 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:09.338563 kubelet[2513]: I0813 07:06:09.338389 2513 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:06:09.338891 containerd[1462]: time="2025-08-13T07:06:09.338838593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:06:09.339321 kubelet[2513]: I0813 07:06:09.339117 2513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:06:10.356860 systemd[1]: Created slice kubepods-besteffort-pod1762ef61_e59e_426a_b12a_82523ec4adb4.slice - libcontainer container kubepods-besteffort-pod1762ef61_e59e_426a_b12a_82523ec4adb4.slice. 
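Editor's note: the kubepods slices created in this log encode each pod's QoS class and UID in the systemd unit name, with dashes in the UID escaped to underscores (compare pod UID 1762ef61-e59e-426a-b12a-82523ec4adb4 with the slice kubepods-besteffort-pod1762ef61_e59e_426a_b12a_82523ec4adb4.slice above, and the earlier burstable static-pod slices whose UIDs contain no dashes). The sketch below reconstructs that naming convention as it appears in these unit names; it mirrors the layout observed here rather than quoting the kubelet's systemd cgroup driver code.

```go
// Sketch: rebuild the systemd slice name observed in the "Created slice"
// lines above: dashes in the pod UID become underscores inside the unit name.
package main

import (
	"fmt"
	"strings"
)

// podSliceName covers the two QoS classes seen in this log
// ("burstable" and "besteffort").
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UIDs taken from the log above.
	fmt.Println(podSliceName("besteffort", "1762ef61-e59e-426a-b12a-82523ec4adb4"))
	fmt.Println(podSliceName("burstable", "5993264d76144ec1969829c940122ab0"))
}
```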
Aug 13 07:06:10.363045 kubelet[2513]: I0813 07:06:10.362968 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd72j\" (UniqueName: \"kubernetes.io/projected/1762ef61-e59e-426a-b12a-82523ec4adb4-kube-api-access-bd72j\") pod \"cilium-operator-6c4d7847fc-dvjz2\" (UID: \"1762ef61-e59e-426a-b12a-82523ec4adb4\") " pod="kube-system/cilium-operator-6c4d7847fc-dvjz2" Aug 13 07:06:10.363045 kubelet[2513]: I0813 07:06:10.363001 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1762ef61-e59e-426a-b12a-82523ec4adb4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dvjz2\" (UID: \"1762ef61-e59e-426a-b12a-82523ec4adb4\") " pod="kube-system/cilium-operator-6c4d7847fc-dvjz2" Aug 13 07:06:10.376744 kubelet[2513]: E0813 07:06:10.376693 2513 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Aug 13 07:06:10.377032 systemd[1]: Created slice kubepods-besteffort-pod6e44da9a_d7b5_46d9_932d_79215dc77939.slice - libcontainer container kubepods-besteffort-pod6e44da9a_d7b5_46d9_932d_79215dc77939.slice. Aug 13 07:06:10.409699 systemd[1]: Created slice kubepods-burstable-pod8cf3670b_b616_4414_8278_dbf26e8ecb68.slice - libcontainer container kubepods-burstable-pod8cf3670b_b616_4414_8278_dbf26e8ecb68.slice. Aug 13 07:06:10.463794 kubelet[2513]: I0813 07:06:10.463692 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e44da9a-d7b5-46d9-932d-79215dc77939-kube-proxy\") pod \"kube-proxy-c6k56\" (UID: \"6e44da9a-d7b5-46d9-932d-79215dc77939\") " pod="kube-system/kube-proxy-c6k56" Aug 13 07:06:10.463794 kubelet[2513]: I0813 07:06:10.463738 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-config-path\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.463794 kubelet[2513]: I0813 07:06:10.463757 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-hubble-tls\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.463794 kubelet[2513]: I0813 07:06:10.463774 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-bpf-maps\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464089 kubelet[2513]: I0813 07:06:10.463864 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-etc-cni-netd\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " 
pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464089 kubelet[2513]: I0813 07:06:10.463947 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-xtables-lock\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464089 kubelet[2513]: I0813 07:06:10.463975 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hct7v\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-kube-api-access-hct7v\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464089 kubelet[2513]: I0813 07:06:10.464002 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e44da9a-d7b5-46d9-932d-79215dc77939-lib-modules\") pod \"kube-proxy-c6k56\" (UID: \"6e44da9a-d7b5-46d9-932d-79215dc77939\") " pod="kube-system/kube-proxy-c6k56" Aug 13 07:06:10.464089 kubelet[2513]: I0813 07:06:10.464044 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-kernel\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464213 kubelet[2513]: I0813 07:06:10.464108 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7j4\" (UniqueName: \"kubernetes.io/projected/6e44da9a-d7b5-46d9-932d-79215dc77939-kube-api-access-rj7j4\") pod \"kube-proxy-c6k56\" (UID: \"6e44da9a-d7b5-46d9-932d-79215dc77939\") " pod="kube-system/kube-proxy-c6k56" Aug 13 07:06:10.464213 kubelet[2513]: I0813 07:06:10.464134 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-lib-modules\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464213 kubelet[2513]: I0813 07:06:10.464153 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cf3670b-b616-4414-8278-dbf26e8ecb68-clustermesh-secrets\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464213 kubelet[2513]: I0813 07:06:10.464173 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-run\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464213 kubelet[2513]: I0813 07:06:10.464195 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-cgroup\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464328 kubelet[2513]: I0813 07:06:10.464216 2513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-hostproc\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464328 kubelet[2513]: I0813 07:06:10.464246 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cni-path\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.464380 kubelet[2513]: I0813 07:06:10.464327 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e44da9a-d7b5-46d9-932d-79215dc77939-xtables-lock\") pod \"kube-proxy-c6k56\" (UID: \"6e44da9a-d7b5-46d9-932d-79215dc77939\") " pod="kube-system/kube-proxy-c6k56" Aug 13 07:06:10.464380 kubelet[2513]: I0813 07:06:10.464355 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-net\") pod \"cilium-w7d79\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " pod="kube-system/cilium-w7d79" Aug 13 07:06:10.667525 kubelet[2513]: E0813 07:06:10.667458 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:10.668367 containerd[1462]: time="2025-08-13T07:06:10.668259812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dvjz2,Uid:1762ef61-e59e-426a-b12a-82523ec4adb4,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:10.699882 containerd[1462]: time="2025-08-13T07:06:10.699344178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:10.699882 containerd[1462]: time="2025-08-13T07:06:10.699476319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:10.699882 containerd[1462]: time="2025-08-13T07:06:10.699491878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:10.700105 containerd[1462]: time="2025-08-13T07:06:10.699998764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:10.713596 kubelet[2513]: E0813 07:06:10.713519 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:10.714358 containerd[1462]: time="2025-08-13T07:06:10.714151541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7d79,Uid:8cf3670b-b616-4414-8278-dbf26e8ecb68,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:10.724838 systemd[1]: Started cri-containerd-f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c.scope - libcontainer container f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c. Aug 13 07:06:10.743823 containerd[1462]: time="2025-08-13T07:06:10.743511944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:10.743823 containerd[1462]: time="2025-08-13T07:06:10.743607786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:10.743823 containerd[1462]: time="2025-08-13T07:06:10.743620581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:10.743972 containerd[1462]: time="2025-08-13T07:06:10.743869375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:10.766895 systemd[1]: Started cri-containerd-a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b.scope - libcontainer container a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b. Aug 13 07:06:10.768331 containerd[1462]: time="2025-08-13T07:06:10.768161617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dvjz2,Uid:1762ef61-e59e-426a-b12a-82523ec4adb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\"" Aug 13 07:06:10.769195 kubelet[2513]: E0813 07:06:10.769159 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:10.770450 containerd[1462]: time="2025-08-13T07:06:10.770409218Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:06:10.794908 containerd[1462]: time="2025-08-13T07:06:10.794855133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7d79,Uid:8cf3670b-b616-4414-8278-dbf26e8ecb68,Namespace:kube-system,Attempt:0,} returns sandbox id \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\"" Aug 13 07:06:10.795696 kubelet[2513]: E0813 07:06:10.795657 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:11.565951 kubelet[2513]: E0813 07:06:11.565897 2513 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:06:11.566385 kubelet[2513]: E0813 07:06:11.565996 2513 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e44da9a-d7b5-46d9-932d-79215dc77939-kube-proxy podName:6e44da9a-d7b5-46d9-932d-79215dc77939 nodeName:}" failed. No retries permitted until 2025-08-13 07:06:12.065974409 +0000 UTC m=+9.227835888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6e44da9a-d7b5-46d9-932d-79215dc77939-kube-proxy") pod "kube-proxy-c6k56" (UID: "6e44da9a-d7b5-46d9-932d-79215dc77939") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:06:12.003902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079782720.mount: Deactivated successfully. 
Aug 13 07:06:12.179568 kubelet[2513]: E0813 07:06:12.179395 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:12.181510 containerd[1462]: time="2025-08-13T07:06:12.181441686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6k56,Uid:6e44da9a-d7b5-46d9-932d-79215dc77939,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:12.212601 containerd[1462]: time="2025-08-13T07:06:12.212228466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:12.212601 containerd[1462]: time="2025-08-13T07:06:12.212283140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:12.212601 containerd[1462]: time="2025-08-13T07:06:12.212297367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:12.212601 containerd[1462]: time="2025-08-13T07:06:12.212374313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:12.233815 systemd[1]: Started cri-containerd-c7c87a927730f537445f75cce3dd54e5001ffe5679769db57d4eb5cdd3991a21.scope - libcontainer container c7c87a927730f537445f75cce3dd54e5001ffe5679769db57d4eb5cdd3991a21. Aug 13 07:06:12.259442 containerd[1462]: time="2025-08-13T07:06:12.259326387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6k56,Uid:6e44da9a-d7b5-46d9-932d-79215dc77939,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c87a927730f537445f75cce3dd54e5001ffe5679769db57d4eb5cdd3991a21\"" Aug 13 07:06:12.260318 kubelet[2513]: E0813 07:06:12.260293 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:12.265646 containerd[1462]: time="2025-08-13T07:06:12.265607804Z" level=info msg="CreateContainer within sandbox \"c7c87a927730f537445f75cce3dd54e5001ffe5679769db57d4eb5cdd3991a21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:06:12.284134 update_engine[1447]: I20250813 07:06:12.284049 1447 update_attempter.cc:509] Updating boot flags... Aug 13 07:06:12.286304 containerd[1462]: time="2025-08-13T07:06:12.286212965Z" level=info msg="CreateContainer within sandbox \"c7c87a927730f537445f75cce3dd54e5001ffe5679769db57d4eb5cdd3991a21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"462f524f118814825da59ff732bde6f931433d487f296008cd2a9b183ef61bf4\"" Aug 13 07:06:12.286970 containerd[1462]: time="2025-08-13T07:06:12.286909529Z" level=info msg="StartContainer for \"462f524f118814825da59ff732bde6f931433d487f296008cd2a9b183ef61bf4\"" Aug 13 07:06:12.317721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2756) Aug 13 07:06:12.357908 systemd[1]: Started cri-containerd-462f524f118814825da59ff732bde6f931433d487f296008cd2a9b183ef61bf4.scope - libcontainer container 462f524f118814825da59ff732bde6f931433d487f296008cd2a9b183ef61bf4. 
Aug 13 07:06:12.369793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2755) Aug 13 07:06:12.382536 containerd[1462]: time="2025-08-13T07:06:12.377833942Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:12.385703 containerd[1462]: time="2025-08-13T07:06:12.384306884Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 07:06:12.385703 containerd[1462]: time="2025-08-13T07:06:12.385361649Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:12.388976 containerd[1462]: time="2025-08-13T07:06:12.388937514Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.618490915s" Aug 13 07:06:12.388976 containerd[1462]: time="2025-08-13T07:06:12.388971489Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 07:06:12.404088 containerd[1462]: time="2025-08-13T07:06:12.404052671Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:06:12.417719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2755) Aug 13 07:06:12.494929 containerd[1462]: time="2025-08-13T07:06:12.494879098Z" level=info msg="CreateContainer within sandbox \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:06:12.497973 containerd[1462]: time="2025-08-13T07:06:12.496393737Z" level=info msg="StartContainer for \"462f524f118814825da59ff732bde6f931433d487f296008cd2a9b183ef61bf4\" returns successfully" Aug 13 07:06:12.517238 containerd[1462]: time="2025-08-13T07:06:12.517136228Z" level=info msg="CreateContainer within sandbox \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\"" Aug 13 07:06:12.517928 containerd[1462]: time="2025-08-13T07:06:12.517892907Z" level=info msg="StartContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\"" Aug 13 07:06:12.546870 systemd[1]: Started cri-containerd-1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b.scope - libcontainer container 1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b. 
Aug 13 07:06:12.576520 containerd[1462]: time="2025-08-13T07:06:12.576472507Z" level=info msg="StartContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" returns successfully" Aug 13 07:06:12.962418 kubelet[2513]: E0813 07:06:12.962281 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:12.964498 kubelet[2513]: E0813 07:06:12.964469 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:13.006215 kubelet[2513]: I0813 07:06:13.006021 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6k56" podStartSLOduration=3.00600371 podStartE2EDuration="3.00600371s" podCreationTimestamp="2025-08-13 07:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:13.005996436 +0000 UTC m=+10.167857925" watchObservedRunningTime="2025-08-13 07:06:13.00600371 +0000 UTC m=+10.167865189" Aug 13 07:06:13.966946 kubelet[2513]: E0813 07:06:13.966892 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:16.354271 kubelet[2513]: E0813 07:06:16.354224 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:16.364216 kubelet[2513]: I0813 07:06:16.364157 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dvjz2" podStartSLOduration=4.74048905 podStartE2EDuration="6.364135306s" podCreationTimestamp="2025-08-13 07:06:10 +0000 UTC" firstStartedPulling="2025-08-13 07:06:10.769964631 +0000 UTC m=+7.931826110" lastFinishedPulling="2025-08-13 07:06:12.393610887 +0000 UTC m=+9.555472366" observedRunningTime="2025-08-13 07:06:13.014512798 +0000 UTC m=+10.176374277" watchObservedRunningTime="2025-08-13 07:06:16.364135306 +0000 UTC m=+13.525996785" Aug 13 07:06:17.573903 kubelet[2513]: E0813 07:06:17.573856 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:24.240729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870460757.mount: Deactivated successfully. Aug 13 07:06:27.539184 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:33838.service - OpenSSH per-connection server daemon (10.0.0.1:33838). Aug 13 07:06:27.739634 sshd[2986]: Accepted publickey for core from 10.0.0.1 port 33838 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:27.741402 sshd[2986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:27.745657 systemd-logind[1445]: New session 8 of user core. Aug 13 07:06:27.751890 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 07:06:27.766909 containerd[1462]: time="2025-08-13T07:06:27.766859093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:27.767669 containerd[1462]: time="2025-08-13T07:06:27.767615980Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 07:06:27.768812 containerd[1462]: time="2025-08-13T07:06:27.768781046Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:27.770270 containerd[1462]: time="2025-08-13T07:06:27.770234365Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.365768809s" Aug 13 07:06:27.770270 containerd[1462]: time="2025-08-13T07:06:27.770267367Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 07:06:27.776879 containerd[1462]: time="2025-08-13T07:06:27.776842399Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:06:27.790302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442337082.mount: Deactivated successfully. Aug 13 07:06:27.793294 containerd[1462]: time="2025-08-13T07:06:27.793251716Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\"" Aug 13 07:06:27.794654 containerd[1462]: time="2025-08-13T07:06:27.793840806Z" level=info msg="StartContainer for \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\"" Aug 13 07:06:27.836852 systemd[1]: Started cri-containerd-63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5.scope - libcontainer container 63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5. Aug 13 07:06:27.874146 containerd[1462]: time="2025-08-13T07:06:27.871388359Z" level=info msg="StartContainer for \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\" returns successfully" Aug 13 07:06:27.882398 systemd[1]: cri-containerd-63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5.scope: Deactivated successfully. Aug 13 07:06:27.891237 sshd[2986]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:27.895553 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:33838.service: Deactivated successfully. Aug 13 07:06:27.897957 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:06:27.898669 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:06:27.899723 systemd-logind[1445]: Removed session 8. 
Aug 13 07:06:28.062233 kubelet[2513]: E0813 07:06:28.062088 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:28.398821 containerd[1462]: time="2025-08-13T07:06:28.398729263Z" level=info msg="shim disconnected" id=63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5 namespace=k8s.io Aug 13 07:06:28.398821 containerd[1462]: time="2025-08-13T07:06:28.398804565Z" level=warning msg="cleaning up after shim disconnected" id=63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5 namespace=k8s.io Aug 13 07:06:28.398821 containerd[1462]: time="2025-08-13T07:06:28.398821366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:06:28.785859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5-rootfs.mount: Deactivated successfully. Aug 13 07:06:29.066288 kubelet[2513]: E0813 07:06:29.066153 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:29.075042 containerd[1462]: time="2025-08-13T07:06:29.074912758Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:06:29.094189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381378315.mount: Deactivated successfully. Aug 13 07:06:29.095554 containerd[1462]: time="2025-08-13T07:06:29.095509112Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\"" Aug 13 07:06:29.096051 containerd[1462]: time="2025-08-13T07:06:29.096022840Z" level=info msg="StartContainer for \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\"" Aug 13 07:06:29.128856 systemd[1]: Started cri-containerd-1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba.scope - libcontainer container 1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba. Aug 13 07:06:29.159915 containerd[1462]: time="2025-08-13T07:06:29.158400920Z" level=info msg="StartContainer for \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\" returns successfully" Aug 13 07:06:29.172908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:06:29.173277 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:06:29.173380 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:06:29.181148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:06:29.181420 systemd[1]: cri-containerd-1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba.scope: Deactivated successfully. 
Aug 13 07:06:29.209409 containerd[1462]: time="2025-08-13T07:06:29.209349883Z" level=info msg="shim disconnected" id=1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba namespace=k8s.io Aug 13 07:06:29.209409 containerd[1462]: time="2025-08-13T07:06:29.209399186Z" level=warning msg="cleaning up after shim disconnected" id=1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba namespace=k8s.io Aug 13 07:06:29.209409 containerd[1462]: time="2025-08-13T07:06:29.209407321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:06:29.214527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:06:29.785983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba-rootfs.mount: Deactivated successfully. Aug 13 07:06:30.069404 kubelet[2513]: E0813 07:06:30.069159 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:30.076754 containerd[1462]: time="2025-08-13T07:06:30.076646192Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:06:30.100502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998106073.mount: Deactivated successfully. Aug 13 07:06:30.107930 containerd[1462]: time="2025-08-13T07:06:30.107884813Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\"" Aug 13 07:06:30.108457 containerd[1462]: time="2025-08-13T07:06:30.108385656Z" level=info msg="StartContainer for \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\"" Aug 13 07:06:30.152934 systemd[1]: Started cri-containerd-758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691.scope - libcontainer container 758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691. Aug 13 07:06:30.189793 containerd[1462]: time="2025-08-13T07:06:30.189705960Z" level=info msg="StartContainer for \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\" returns successfully" Aug 13 07:06:30.190279 systemd[1]: cri-containerd-758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691.scope: Deactivated successfully. Aug 13 07:06:30.219361 containerd[1462]: time="2025-08-13T07:06:30.219283244Z" level=info msg="shim disconnected" id=758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691 namespace=k8s.io Aug 13 07:06:30.219361 containerd[1462]: time="2025-08-13T07:06:30.219352165Z" level=warning msg="cleaning up after shim disconnected" id=758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691 namespace=k8s.io Aug 13 07:06:30.219361 containerd[1462]: time="2025-08-13T07:06:30.219364327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:06:30.786148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691-rootfs.mount: Deactivated successfully. 
Aug 13 07:06:31.072894 kubelet[2513]: E0813 07:06:31.072736 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:31.247432 containerd[1462]: time="2025-08-13T07:06:31.247366747Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:06:31.312399 containerd[1462]: time="2025-08-13T07:06:31.312342514Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\"" Aug 13 07:06:31.312868 containerd[1462]: time="2025-08-13T07:06:31.312836434Z" level=info msg="StartContainer for \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\"" Aug 13 07:06:31.347922 systemd[1]: Started cri-containerd-2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9.scope - libcontainer container 2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9. Aug 13 07:06:31.374297 systemd[1]: cri-containerd-2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9.scope: Deactivated successfully. Aug 13 07:06:31.539742 containerd[1462]: time="2025-08-13T07:06:31.539658834Z" level=info msg="StartContainer for \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\" returns successfully" Aug 13 07:06:31.653741 containerd[1462]: time="2025-08-13T07:06:31.653664798Z" level=info msg="shim disconnected" id=2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9 namespace=k8s.io Aug 13 07:06:31.653741 containerd[1462]: time="2025-08-13T07:06:31.653740141Z" level=warning msg="cleaning up after shim disconnected" id=2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9 namespace=k8s.io Aug 13 07:06:31.654000 containerd[1462]: time="2025-08-13T07:06:31.653748747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:06:31.786356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9-rootfs.mount: Deactivated successfully. 
Aug 13 07:06:32.078478 kubelet[2513]: E0813 07:06:32.078311 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:32.086039 containerd[1462]: time="2025-08-13T07:06:32.085979765Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:06:32.104974 containerd[1462]: time="2025-08-13T07:06:32.104914712Z" level=info msg="CreateContainer within sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\"" Aug 13 07:06:32.105596 containerd[1462]: time="2025-08-13T07:06:32.105558703Z" level=info msg="StartContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\"" Aug 13 07:06:32.148887 systemd[1]: Started cri-containerd-561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282.scope - libcontainer container 561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282. Aug 13 07:06:32.182108 containerd[1462]: time="2025-08-13T07:06:32.182060641Z" level=info msg="StartContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" returns successfully" Aug 13 07:06:32.320212 kubelet[2513]: I0813 07:06:32.320174 2513 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:06:32.359584 systemd[1]: Created slice kubepods-burstable-pode61db78c_483d_4cde_a1f5_bd29dd4e1813.slice - libcontainer container kubepods-burstable-pode61db78c_483d_4cde_a1f5_bd29dd4e1813.slice. Aug 13 07:06:32.369616 systemd[1]: Created slice kubepods-burstable-podb2c0941d_3b71_4686_a73b_502c531de984.slice - libcontainer container kubepods-burstable-podb2c0941d_3b71_4686_a73b_502c531de984.slice. 
Aug 13 07:06:32.418291 kubelet[2513]: I0813 07:06:32.418250 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtd7\" (UniqueName: \"kubernetes.io/projected/b2c0941d-3b71-4686-a73b-502c531de984-kube-api-access-cjtd7\") pod \"coredns-674b8bbfcf-srd6v\" (UID: \"b2c0941d-3b71-4686-a73b-502c531de984\") " pod="kube-system/coredns-674b8bbfcf-srd6v" Aug 13 07:06:32.418291 kubelet[2513]: I0813 07:06:32.418283 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e61db78c-483d-4cde-a1f5-bd29dd4e1813-config-volume\") pod \"coredns-674b8bbfcf-69gnm\" (UID: \"e61db78c-483d-4cde-a1f5-bd29dd4e1813\") " pod="kube-system/coredns-674b8bbfcf-69gnm" Aug 13 07:06:32.418459 kubelet[2513]: I0813 07:06:32.418301 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwfb\" (UniqueName: \"kubernetes.io/projected/e61db78c-483d-4cde-a1f5-bd29dd4e1813-kube-api-access-ntwfb\") pod \"coredns-674b8bbfcf-69gnm\" (UID: \"e61db78c-483d-4cde-a1f5-bd29dd4e1813\") " pod="kube-system/coredns-674b8bbfcf-69gnm" Aug 13 07:06:32.418459 kubelet[2513]: I0813 07:06:32.418320 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c0941d-3b71-4686-a73b-502c531de984-config-volume\") pod \"coredns-674b8bbfcf-srd6v\" (UID: \"b2c0941d-3b71-4686-a73b-502c531de984\") " pod="kube-system/coredns-674b8bbfcf-srd6v" Aug 13 07:06:32.666967 kubelet[2513]: E0813 07:06:32.666925 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:32.667590 containerd[1462]: time="2025-08-13T07:06:32.667552797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69gnm,Uid:e61db78c-483d-4cde-a1f5-bd29dd4e1813,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:32.674117 kubelet[2513]: E0813 07:06:32.674082 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:32.676887 containerd[1462]: time="2025-08-13T07:06:32.676823163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srd6v,Uid:b2c0941d-3b71-4686-a73b-502c531de984,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:32.910667 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Aug 13 07:06:32.949181 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:32.950831 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:32.955353 systemd-logind[1445]: New session 9 of user core. Aug 13 07:06:32.966892 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 07:06:33.082564 kubelet[2513]: E0813 07:06:33.082514 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:33.156776 kubelet[2513]: I0813 07:06:33.156699 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w7d79" podStartSLOduration=6.181574363 podStartE2EDuration="23.156660318s" podCreationTimestamp="2025-08-13 07:06:10 +0000 UTC" firstStartedPulling="2025-08-13 07:06:10.796142875 +0000 UTC m=+7.958004354" lastFinishedPulling="2025-08-13 07:06:27.771228829 +0000 UTC m=+24.933090309" observedRunningTime="2025-08-13 07:06:33.155058144 +0000 UTC m=+30.316919623" watchObservedRunningTime="2025-08-13 07:06:33.156660318 +0000 UTC m=+30.318521797" Aug 13 07:06:33.161048 sshd[3387]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.166554 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:48626.service: Deactivated successfully. Aug 13 07:06:33.168571 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:06:33.169379 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:06:33.170506 systemd-logind[1445]: Removed session 9. Aug 13 07:06:34.084785 kubelet[2513]: E0813 07:06:34.084731 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:35.086412 kubelet[2513]: E0813 07:06:35.086348 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:38.173161 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:48636.service - OpenSSH per-connection server daemon (10.0.0.1:48636). Aug 13 07:06:38.209228 sshd[3405]: Accepted publickey for core from 10.0.0.1 port 48636 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:38.211375 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:38.216340 systemd-logind[1445]: New session 10 of user core. Aug 13 07:06:38.228906 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:06:38.344764 sshd[3405]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:38.348888 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:48636.service: Deactivated successfully. Aug 13 07:06:38.350987 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:06:38.351620 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:06:38.352550 systemd-logind[1445]: Removed session 10. Aug 13 07:06:43.355553 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:45166.service - OpenSSH per-connection server daemon (10.0.0.1:45166). Aug 13 07:06:43.389386 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 45166 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:43.391168 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:43.395341 systemd-logind[1445]: New session 11 of user core. Aug 13 07:06:43.404805 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:06:43.512693 sshd[3423]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:43.525716 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:45166.service: Deactivated successfully. 
Aug 13 07:06:43.527648 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:06:43.529480 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:06:43.537922 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:45172.service - OpenSSH per-connection server daemon (10.0.0.1:45172). Aug 13 07:06:43.538786 systemd-logind[1445]: Removed session 11. Aug 13 07:06:43.567699 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 45172 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:43.569384 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:43.573234 systemd-logind[1445]: New session 12 of user core. Aug 13 07:06:43.586801 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:06:43.728956 sshd[3438]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:43.740927 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:45172.service: Deactivated successfully. Aug 13 07:06:43.742825 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:06:43.746799 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:06:43.758224 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:45178.service - OpenSSH per-connection server daemon (10.0.0.1:45178). Aug 13 07:06:43.759131 systemd-logind[1445]: Removed session 12. Aug 13 07:06:43.784356 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:43.785949 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:43.789919 systemd-logind[1445]: New session 13 of user core. Aug 13 07:06:43.800808 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:06:43.908066 sshd[3450]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:43.912357 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:45178.service: Deactivated successfully. Aug 13 07:06:43.914548 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:06:43.915480 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:06:43.916409 systemd-logind[1445]: Removed session 13. Aug 13 07:06:48.920054 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184). Aug 13 07:06:48.955175 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:48.957121 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:48.961044 systemd-logind[1445]: New session 14 of user core. Aug 13 07:06:48.975820 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:06:49.094192 sshd[3465]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:49.098872 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:45184.service: Deactivated successfully. Aug 13 07:06:49.101185 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:06:49.101948 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:06:49.103012 systemd-logind[1445]: Removed session 14. 
Aug 13 07:06:52.220072 systemd-networkd[1391]: cilium_host: Link UP Aug 13 07:06:52.220247 systemd-networkd[1391]: cilium_net: Link UP Aug 13 07:06:52.220444 systemd-networkd[1391]: cilium_net: Gained carrier Aug 13 07:06:52.220631 systemd-networkd[1391]: cilium_host: Gained carrier Aug 13 07:06:52.337383 systemd-networkd[1391]: cilium_vxlan: Link UP Aug 13 07:06:52.337397 systemd-networkd[1391]: cilium_vxlan: Gained carrier Aug 13 07:06:52.586709 kernel: NET: Registered PF_ALG protocol family Aug 13 07:06:53.090879 systemd-networkd[1391]: cilium_host: Gained IPv6LL Aug 13 07:06:53.217913 systemd-networkd[1391]: cilium_net: Gained IPv6LL Aug 13 07:06:53.280154 systemd-networkd[1391]: lxc_health: Link UP Aug 13 07:06:53.290436 systemd-networkd[1391]: lxc_health: Gained carrier Aug 13 07:06:53.537898 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Aug 13 07:06:53.751292 systemd-networkd[1391]: lxc32c0c6d06dcf: Link UP Aug 13 07:06:53.762001 kernel: eth0: renamed from tmp6f15e Aug 13 07:06:53.766610 systemd-networkd[1391]: lxc32c0c6d06dcf: Gained carrier Aug 13 07:06:53.774538 systemd-networkd[1391]: lxc4b3c6d367d45: Link UP Aug 13 07:06:53.782713 kernel: eth0: renamed from tmp24d1b Aug 13 07:06:53.795505 systemd-networkd[1391]: lxc4b3c6d367d45: Gained carrier Aug 13 07:06:54.108026 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:60236.service - OpenSSH per-connection server daemon (10.0.0.1:60236). Aug 13 07:06:54.151858 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 60236 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:54.153528 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:54.157928 systemd-logind[1445]: New session 15 of user core. Aug 13 07:06:54.162816 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:06:54.279562 sshd[3852]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:54.283932 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:60236.service: Deactivated successfully. Aug 13 07:06:54.285896 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:06:54.286542 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:06:54.287445 systemd-logind[1445]: Removed session 15. Aug 13 07:06:54.689876 systemd-networkd[1391]: lxc_health: Gained IPv6LL Aug 13 07:06:54.715494 kubelet[2513]: E0813 07:06:54.715453 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:54.881919 systemd-networkd[1391]: lxc4b3c6d367d45: Gained IPv6LL Aug 13 07:06:55.073868 systemd-networkd[1391]: lxc32c0c6d06dcf: Gained IPv6LL Aug 13 07:06:55.127361 kubelet[2513]: E0813 07:06:55.127337 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:56.129643 kubelet[2513]: E0813 07:06:56.129594 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:57.112058 containerd[1462]: time="2025-08-13T07:06:57.111952004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:57.112058 containerd[1462]: time="2025-08-13T07:06:57.112004442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:57.113159 containerd[1462]: time="2025-08-13T07:06:57.112659842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:57.113898 containerd[1462]: time="2025-08-13T07:06:57.113827524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:57.118666 containerd[1462]: time="2025-08-13T07:06:57.118516903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:57.118666 containerd[1462]: time="2025-08-13T07:06:57.118578078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:57.118666 containerd[1462]: time="2025-08-13T07:06:57.118589690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:57.120332 containerd[1462]: time="2025-08-13T07:06:57.118697612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:57.149504 systemd[1]: Started cri-containerd-24d1bd52e5a371e7ca07e50274e5d8f387f05fa1177ed33c381a23114dc3e983.scope - libcontainer container 24d1bd52e5a371e7ca07e50274e5d8f387f05fa1177ed33c381a23114dc3e983. Aug 13 07:06:57.151649 systemd[1]: Started cri-containerd-6f15e8a5df94904b532661a301f26586466c8550e541d150abfa13d9d168d221.scope - libcontainer container 6f15e8a5df94904b532661a301f26586466c8550e541d150abfa13d9d168d221. 
Aug 13 07:06:57.165875 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:06:57.167302 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:06:57.193402 containerd[1462]: time="2025-08-13T07:06:57.193294375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69gnm,Uid:e61db78c-483d-4cde-a1f5-bd29dd4e1813,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f15e8a5df94904b532661a301f26586466c8550e541d150abfa13d9d168d221\"" Aug 13 07:06:57.194491 kubelet[2513]: E0813 07:06:57.194439 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:57.200948 containerd[1462]: time="2025-08-13T07:06:57.200163775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srd6v,Uid:b2c0941d-3b71-4686-a73b-502c531de984,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d1bd52e5a371e7ca07e50274e5d8f387f05fa1177ed33c381a23114dc3e983\"" Aug 13 07:06:57.201069 kubelet[2513]: E0813 07:06:57.201031 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:57.210003 containerd[1462]: time="2025-08-13T07:06:57.209967140Z" level=info msg="CreateContainer within sandbox \"6f15e8a5df94904b532661a301f26586466c8550e541d150abfa13d9d168d221\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:06:57.212494 containerd[1462]: time="2025-08-13T07:06:57.212467852Z" level=info msg="CreateContainer within sandbox \"24d1bd52e5a371e7ca07e50274e5d8f387f05fa1177ed33c381a23114dc3e983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:06:57.228458 containerd[1462]: time="2025-08-13T07:06:57.228408083Z" level=info msg="CreateContainer within sandbox \"24d1bd52e5a371e7ca07e50274e5d8f387f05fa1177ed33c381a23114dc3e983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4921c3cd2fea0b7488ce04b3a820699ee3acd501e75552903fa8fdaa2d1689d\"" Aug 13 07:06:57.229005 containerd[1462]: time="2025-08-13T07:06:57.228978062Z" level=info msg="StartContainer for \"f4921c3cd2fea0b7488ce04b3a820699ee3acd501e75552903fa8fdaa2d1689d\"" Aug 13 07:06:57.236370 containerd[1462]: time="2025-08-13T07:06:57.236323054Z" level=info msg="CreateContainer within sandbox \"6f15e8a5df94904b532661a301f26586466c8550e541d150abfa13d9d168d221\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"596d645f9b88064e1e81ee1f073063c41e183e5602fcfcc9caebb2abbdb2d048\"" Aug 13 07:06:57.237194 containerd[1462]: time="2025-08-13T07:06:57.237169022Z" level=info msg="StartContainer for \"596d645f9b88064e1e81ee1f073063c41e183e5602fcfcc9caebb2abbdb2d048\"" Aug 13 07:06:57.256837 systemd[1]: Started cri-containerd-f4921c3cd2fea0b7488ce04b3a820699ee3acd501e75552903fa8fdaa2d1689d.scope - libcontainer container f4921c3cd2fea0b7488ce04b3a820699ee3acd501e75552903fa8fdaa2d1689d. Aug 13 07:06:57.260589 systemd[1]: Started cri-containerd-596d645f9b88064e1e81ee1f073063c41e183e5602fcfcc9caebb2abbdb2d048.scope - libcontainer container 596d645f9b88064e1e81ee1f073063c41e183e5602fcfcc9caebb2abbdb2d048. 
Aug 13 07:06:57.291415 containerd[1462]: time="2025-08-13T07:06:57.291370495Z" level=info msg="StartContainer for \"f4921c3cd2fea0b7488ce04b3a820699ee3acd501e75552903fa8fdaa2d1689d\" returns successfully" Aug 13 07:06:57.291415 containerd[1462]: time="2025-08-13T07:06:57.291398959Z" level=info msg="StartContainer for \"596d645f9b88064e1e81ee1f073063c41e183e5602fcfcc9caebb2abbdb2d048\" returns successfully" Aug 13 07:06:58.120344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841748938.mount: Deactivated successfully. Aug 13 07:06:58.134100 kubelet[2513]: E0813 07:06:58.133708 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:58.135030 kubelet[2513]: E0813 07:06:58.135009 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:58.168176 kubelet[2513]: I0813 07:06:58.168104 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-srd6v" podStartSLOduration=48.168080789 podStartE2EDuration="48.168080789s" podCreationTimestamp="2025-08-13 07:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:58.153489951 +0000 UTC m=+55.315351430" watchObservedRunningTime="2025-08-13 07:06:58.168080789 +0000 UTC m=+55.329942268" Aug 13 07:06:58.181598 kubelet[2513]: I0813 07:06:58.180974 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-69gnm" podStartSLOduration=48.180954373 podStartE2EDuration="48.180954373s" podCreationTimestamp="2025-08-13 07:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:06:58.180413248 +0000 UTC m=+55.342274737" watchObservedRunningTime="2025-08-13 07:06:58.180954373 +0000 UTC m=+55.342815852" Aug 13 07:06:59.136841 kubelet[2513]: E0813 07:06:59.136789 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:59.137371 kubelet[2513]: E0813 07:06:59.136866 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:06:59.293036 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250). Aug 13 07:06:59.329908 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:06:59.331909 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:59.336208 systemd-logind[1445]: New session 16 of user core. Aug 13 07:06:59.343839 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:06:59.546632 sshd[4047]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:59.550520 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:60250.service: Deactivated successfully. Aug 13 07:06:59.552430 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:06:59.553012 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. 
Aug 13 07:06:59.553947 systemd-logind[1445]: Removed session 16. Aug 13 07:07:00.138825 kubelet[2513]: E0813 07:07:00.138774 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:00.139401 kubelet[2513]: E0813 07:07:00.138783 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:04.562071 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:54052.service - OpenSSH per-connection server daemon (10.0.0.1:54052). Aug 13 07:07:04.594872 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 54052 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:04.596639 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:04.600885 systemd-logind[1445]: New session 17 of user core. Aug 13 07:07:04.619908 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:07:04.742440 sshd[4065]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:04.751878 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:54052.service: Deactivated successfully. Aug 13 07:07:04.754440 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:07:04.755236 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:07:04.757445 systemd-logind[1445]: Removed session 17. Aug 13 07:07:04.766003 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:54056.service - OpenSSH per-connection server daemon (10.0.0.1:54056). Aug 13 07:07:04.792439 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 54056 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:04.794323 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:04.798607 systemd-logind[1445]: New session 18 of user core. Aug 13 07:07:04.808831 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:07:05.102966 sshd[4079]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:05.115355 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:54056.service: Deactivated successfully. Aug 13 07:07:05.117884 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:07:05.120054 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:07:05.124955 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:54072.service - OpenSSH per-connection server daemon (10.0.0.1:54072). Aug 13 07:07:05.126096 systemd-logind[1445]: Removed session 18. Aug 13 07:07:05.162011 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 54072 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:05.163983 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:05.168986 systemd-logind[1445]: New session 19 of user core. Aug 13 07:07:05.179869 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:07:05.806353 sshd[4092]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:05.817444 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:54072.service: Deactivated successfully. Aug 13 07:07:05.823990 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:07:05.826321 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. 
Aug 13 07:07:05.835210 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082). Aug 13 07:07:05.837541 systemd-logind[1445]: Removed session 19. Aug 13 07:07:05.861883 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:05.863546 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:05.868334 systemd-logind[1445]: New session 20 of user core. Aug 13 07:07:05.878076 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:07:06.716200 sshd[4113]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:06.726965 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:54082.service: Deactivated successfully. Aug 13 07:07:06.729058 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:07:06.730431 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:07:06.740118 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096). Aug 13 07:07:06.740746 systemd-logind[1445]: Removed session 20. Aug 13 07:07:06.769692 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:06.771827 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:06.776454 systemd-logind[1445]: New session 21 of user core. Aug 13 07:07:06.787863 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:07:07.260650 sshd[4125]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:07.265486 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:54096.service: Deactivated successfully. Aug 13 07:07:07.268282 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:07:07.269002 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:07:07.270227 systemd-logind[1445]: Removed session 21. Aug 13 07:07:12.277710 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:57960.service - OpenSSH per-connection server daemon (10.0.0.1:57960). Aug 13 07:07:12.320653 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 57960 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:12.322941 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:12.327404 systemd-logind[1445]: New session 22 of user core. Aug 13 07:07:12.336826 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:07:12.454936 sshd[4139]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:12.458708 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:57960.service: Deactivated successfully. Aug 13 07:07:12.460552 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:07:12.461342 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:07:12.462278 systemd-logind[1445]: Removed session 22. Aug 13 07:07:17.472871 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:57964.service - OpenSSH per-connection server daemon (10.0.0.1:57964). Aug 13 07:07:17.525490 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 57964 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:17.527761 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:17.536298 systemd-logind[1445]: New session 23 of user core. 
Aug 13 07:07:17.545901 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:07:17.714867 sshd[4158]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:17.721662 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:57964.service: Deactivated successfully. Aug 13 07:07:17.727394 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:07:17.729209 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:07:17.730845 systemd-logind[1445]: Removed session 23. Aug 13 07:07:19.934538 kubelet[2513]: E0813 07:07:19.934432 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:22.732885 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:40118.service - OpenSSH per-connection server daemon (10.0.0.1:40118). Aug 13 07:07:22.775390 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 40118 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:22.777184 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:22.781654 systemd-logind[1445]: New session 24 of user core. Aug 13 07:07:22.791822 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:07:22.909168 sshd[4173]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:22.922109 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:40118.service: Deactivated successfully. Aug 13 07:07:22.924350 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:07:22.926196 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:07:22.935975 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:40122.service - OpenSSH per-connection server daemon (10.0.0.1:40122). Aug 13 07:07:22.936922 systemd-logind[1445]: Removed session 24. Aug 13 07:07:22.966147 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 40122 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:22.967782 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:22.972145 systemd-logind[1445]: New session 25 of user core. Aug 13 07:07:22.982803 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:07:24.358822 containerd[1462]: time="2025-08-13T07:07:24.358685003Z" level=info msg="StopContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" with timeout 30 (s)" Aug 13 07:07:24.360192 containerd[1462]: time="2025-08-13T07:07:24.360162569Z" level=info msg="Stop container \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" with signal terminated" Aug 13 07:07:24.384069 systemd[1]: cri-containerd-1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b.scope: Deactivated successfully. Aug 13 07:07:24.410626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b-rootfs.mount: Deactivated successfully. 
Aug 13 07:07:24.411204 containerd[1462]: time="2025-08-13T07:07:24.411154430Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:07:24.412777 containerd[1462]: time="2025-08-13T07:07:24.412746465Z" level=info msg="StopContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" with timeout 2 (s)" Aug 13 07:07:24.413091 containerd[1462]: time="2025-08-13T07:07:24.413054924Z" level=info msg="Stop container \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" with signal terminated" Aug 13 07:07:24.420892 containerd[1462]: time="2025-08-13T07:07:24.420826046Z" level=info msg="shim disconnected" id=1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b namespace=k8s.io Aug 13 07:07:24.420892 containerd[1462]: time="2025-08-13T07:07:24.420886772Z" level=warning msg="cleaning up after shim disconnected" id=1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b namespace=k8s.io Aug 13 07:07:24.420892 containerd[1462]: time="2025-08-13T07:07:24.420895628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:24.421283 systemd-networkd[1391]: lxc_health: Link DOWN Aug 13 07:07:24.421298 systemd-networkd[1391]: lxc_health: Lost carrier Aug 13 07:07:24.447029 containerd[1462]: time="2025-08-13T07:07:24.446970862Z" level=info msg="StopContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" returns successfully" Aug 13 07:07:24.447751 containerd[1462]: time="2025-08-13T07:07:24.447707801Z" level=info msg="StopPodSandbox for \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\"" Aug 13 07:07:24.447751 containerd[1462]: time="2025-08-13T07:07:24.447753429Z" level=info msg="Container to stop \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.450388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c-shm.mount: Deactivated successfully. Aug 13 07:07:24.451453 systemd[1]: cri-containerd-561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282.scope: Deactivated successfully. Aug 13 07:07:24.452793 systemd[1]: cri-containerd-561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282.scope: Consumed 6.973s CPU time. Aug 13 07:07:24.464250 systemd[1]: cri-containerd-f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c.scope: Deactivated successfully. Aug 13 07:07:24.478317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282-rootfs.mount: Deactivated successfully. Aug 13 07:07:24.487151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c-rootfs.mount: Deactivated successfully. 
Aug 13 07:07:24.488126 containerd[1462]: time="2025-08-13T07:07:24.488053785Z" level=info msg="shim disconnected" id=561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282 namespace=k8s.io Aug 13 07:07:24.488284 containerd[1462]: time="2025-08-13T07:07:24.488131213Z" level=warning msg="cleaning up after shim disconnected" id=561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282 namespace=k8s.io Aug 13 07:07:24.488284 containerd[1462]: time="2025-08-13T07:07:24.488145511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:24.488499 containerd[1462]: time="2025-08-13T07:07:24.488403444Z" level=info msg="shim disconnected" id=f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c namespace=k8s.io Aug 13 07:07:24.488499 containerd[1462]: time="2025-08-13T07:07:24.488464872Z" level=warning msg="cleaning up after shim disconnected" id=f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c namespace=k8s.io Aug 13 07:07:24.488499 containerd[1462]: time="2025-08-13T07:07:24.488474089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:24.506315 containerd[1462]: time="2025-08-13T07:07:24.506253308Z" level=info msg="StopContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" returns successfully" Aug 13 07:07:24.506811 containerd[1462]: time="2025-08-13T07:07:24.506777390Z" level=info msg="StopPodSandbox for \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\"" Aug 13 07:07:24.506877 containerd[1462]: time="2025-08-13T07:07:24.506809431Z" level=info msg="Container to stop \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.506877 containerd[1462]: time="2025-08-13T07:07:24.506829751Z" level=info msg="Container to stop \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.506877 containerd[1462]: time="2025-08-13T07:07:24.506838667Z" level=info msg="Container to stop \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.506877 containerd[1462]: time="2025-08-13T07:07:24.506850951Z" level=info msg="Container to stop \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.506877 containerd[1462]: time="2025-08-13T07:07:24.506860058Z" level=info msg="Container to stop \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:07:24.513645 systemd[1]: cri-containerd-a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b.scope: Deactivated successfully. 
Aug 13 07:07:24.513815 containerd[1462]: time="2025-08-13T07:07:24.513744062Z" level=info msg="TearDown network for sandbox \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\" successfully" Aug 13 07:07:24.513815 containerd[1462]: time="2025-08-13T07:07:24.513767407Z" level=info msg="StopPodSandbox for \"f4857f23f5364ec5a8d5418c6a2f073b4ed9a29db56edda204f2ee5301bd5c2c\" returns successfully" Aug 13 07:07:24.549941 containerd[1462]: time="2025-08-13T07:07:24.549861310Z" level=info msg="shim disconnected" id=a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b namespace=k8s.io Aug 13 07:07:24.549941 containerd[1462]: time="2025-08-13T07:07:24.549927427Z" level=warning msg="cleaning up after shim disconnected" id=a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b namespace=k8s.io Aug 13 07:07:24.549941 containerd[1462]: time="2025-08-13T07:07:24.549936243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:24.582459 containerd[1462]: time="2025-08-13T07:07:24.582366840Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:07:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:07:24.583999 containerd[1462]: time="2025-08-13T07:07:24.583915903Z" level=info msg="TearDown network for sandbox \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" successfully" Aug 13 07:07:24.583999 containerd[1462]: time="2025-08-13T07:07:24.583964426Z" level=info msg="StopPodSandbox for \"a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b\" returns successfully" Aug 13 07:07:24.642184 kubelet[2513]: I0813 07:07:24.642119 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-lib-modules\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.642184 kubelet[2513]: I0813 07:07:24.642158 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-net\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.642184 kubelet[2513]: I0813 07:07:24.642177 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-cgroup\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.642184 kubelet[2513]: I0813 07:07:24.642199 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-config-path\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.642184 kubelet[2513]: I0813 07:07:24.642219 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-run\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642237 2513 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1762ef61-e59e-426a-b12a-82523ec4adb4-cilium-config-path\") pod \"1762ef61-e59e-426a-b12a-82523ec4adb4\" (UID: \"1762ef61-e59e-426a-b12a-82523ec4adb4\") " Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642250 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-hostproc\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642241 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642272 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd72j\" (UniqueName: \"kubernetes.io/projected/1762ef61-e59e-426a-b12a-82523ec4adb4-kube-api-access-bd72j\") pod \"1762ef61-e59e-426a-b12a-82523ec4adb4\" (UID: \"1762ef61-e59e-426a-b12a-82523ec4adb4\") " Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642288 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-bpf-maps\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643067 kubelet[2513]: I0813 07:07:24.642305 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-xtables-lock\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643300 kubelet[2513]: I0813 07:07:24.642254 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643300 kubelet[2513]: I0813 07:07:24.642289 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643300 kubelet[2513]: I0813 07:07:24.642301 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643300 kubelet[2513]: I0813 07:07:24.642330 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643300 kubelet[2513]: I0813 07:07:24.642343 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-hostproc" (OuterVolumeSpecName: "hostproc") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642762 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642787 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hct7v\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-kube-api-access-hct7v\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642807 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cf3670b-b616-4414-8278-dbf26e8ecb68-clustermesh-secrets\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642823 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cni-path\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642843 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-etc-cni-netd\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643484 kubelet[2513]: I0813 07:07:24.642858 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-hubble-tls\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642872 2513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-kernel\") pod \"8cf3670b-b616-4414-8278-dbf26e8ecb68\" (UID: \"8cf3670b-b616-4414-8278-dbf26e8ecb68\") " Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642910 2513 reconciler_common.go:299] "Volume detached 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642919 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642927 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642937 2513 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642946 2513 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642954 2513 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643706 kubelet[2513]: I0813 07:07:24.642965 2513 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.643986 kubelet[2513]: I0813 07:07:24.642984 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.644522 kubelet[2513]: I0813 07:07:24.644467 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cni-path" (OuterVolumeSpecName: "cni-path") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.646697 kubelet[2513]: I0813 07:07:24.646413 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:07:24.646697 kubelet[2513]: I0813 07:07:24.646459 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:07:24.646949 kubelet[2513]: I0813 07:07:24.646853 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762ef61-e59e-426a-b12a-82523ec4adb4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1762ef61-e59e-426a-b12a-82523ec4adb4" (UID: "1762ef61-e59e-426a-b12a-82523ec4adb4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:07:24.648486 kubelet[2513]: I0813 07:07:24.648448 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-kube-api-access-hct7v" (OuterVolumeSpecName: "kube-api-access-hct7v") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "kube-api-access-hct7v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:07:24.648847 kubelet[2513]: I0813 07:07:24.648809 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1762ef61-e59e-426a-b12a-82523ec4adb4-kube-api-access-bd72j" (OuterVolumeSpecName: "kube-api-access-bd72j") pod "1762ef61-e59e-426a-b12a-82523ec4adb4" (UID: "1762ef61-e59e-426a-b12a-82523ec4adb4"). InnerVolumeSpecName "kube-api-access-bd72j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:07:24.649465 kubelet[2513]: I0813 07:07:24.649351 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:07:24.649943 kubelet[2513]: I0813 07:07:24.649897 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf3670b-b616-4414-8278-dbf26e8ecb68-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8cf3670b-b616-4414-8278-dbf26e8ecb68" (UID: "8cf3670b-b616-4414-8278-dbf26e8ecb68"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:07:24.743429 kubelet[2513]: I0813 07:07:24.743405 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cf3670b-b616-4414-8278-dbf26e8ecb68-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743429 kubelet[2513]: I0813 07:07:24.743427 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1762ef61-e59e-426a-b12a-82523ec4adb4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743437 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bd72j\" (UniqueName: \"kubernetes.io/projected/1762ef61-e59e-426a-b12a-82523ec4adb4-kube-api-access-bd72j\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743445 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hct7v\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-kube-api-access-hct7v\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743455 2513 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cf3670b-b616-4414-8278-dbf26e8ecb68-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743463 2513 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743471 2513 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743479 2513 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cf3670b-b616-4414-8278-dbf26e8ecb68-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.743550 kubelet[2513]: I0813 07:07:24.743488 2513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cf3670b-b616-4414-8278-dbf26e8ecb68-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 07:07:24.945197 systemd[1]: Removed slice kubepods-besteffort-pod1762ef61_e59e_426a_b12a_82523ec4adb4.slice - libcontainer container kubepods-besteffort-pod1762ef61_e59e_426a_b12a_82523ec4adb4.slice. Aug 13 07:07:24.947452 systemd[1]: Removed slice kubepods-burstable-pod8cf3670b_b616_4414_8278_dbf26e8ecb68.slice - libcontainer container kubepods-burstable-pod8cf3670b_b616_4414_8278_dbf26e8ecb68.slice. Aug 13 07:07:24.947705 systemd[1]: kubepods-burstable-pod8cf3670b_b616_4414_8278_dbf26e8ecb68.slice: Consumed 7.085s CPU time. 
Aug 13 07:07:25.187443 kubelet[2513]: I0813 07:07:25.187396 2513 scope.go:117] "RemoveContainer" containerID="561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282" Aug 13 07:07:25.190361 containerd[1462]: time="2025-08-13T07:07:25.189432665Z" level=info msg="RemoveContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\"" Aug 13 07:07:25.215082 containerd[1462]: time="2025-08-13T07:07:25.214920312Z" level=info msg="RemoveContainer for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" returns successfully" Aug 13 07:07:25.215468 kubelet[2513]: I0813 07:07:25.215397 2513 scope.go:117] "RemoveContainer" containerID="2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9" Aug 13 07:07:25.216750 containerd[1462]: time="2025-08-13T07:07:25.216706907Z" level=info msg="RemoveContainer for \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\"" Aug 13 07:07:25.220603 containerd[1462]: time="2025-08-13T07:07:25.220562025Z" level=info msg="RemoveContainer for \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\" returns successfully" Aug 13 07:07:25.220824 kubelet[2513]: I0813 07:07:25.220778 2513 scope.go:117] "RemoveContainer" containerID="758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691" Aug 13 07:07:25.222875 containerd[1462]: time="2025-08-13T07:07:25.222323653Z" level=info msg="RemoveContainer for \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\"" Aug 13 07:07:25.226467 containerd[1462]: time="2025-08-13T07:07:25.226434860Z" level=info msg="RemoveContainer for \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\" returns successfully" Aug 13 07:07:25.226705 kubelet[2513]: I0813 07:07:25.226639 2513 scope.go:117] "RemoveContainer" containerID="1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba" Aug 13 07:07:25.228092 containerd[1462]: time="2025-08-13T07:07:25.228049947Z" level=info msg="RemoveContainer for \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\"" Aug 13 07:07:25.231936 containerd[1462]: time="2025-08-13T07:07:25.231900707Z" level=info msg="RemoveContainer for \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\" returns successfully" Aug 13 07:07:25.232069 kubelet[2513]: I0813 07:07:25.232048 2513 scope.go:117] "RemoveContainer" containerID="63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5" Aug 13 07:07:25.233199 containerd[1462]: time="2025-08-13T07:07:25.233171046Z" level=info msg="RemoveContainer for \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\"" Aug 13 07:07:25.236544 containerd[1462]: time="2025-08-13T07:07:25.236514807Z" level=info msg="RemoveContainer for \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\" returns successfully" Aug 13 07:07:25.236727 kubelet[2513]: I0813 07:07:25.236697 2513 scope.go:117] "RemoveContainer" containerID="561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282" Aug 13 07:07:25.239778 containerd[1462]: time="2025-08-13T07:07:25.239743918Z" level=error msg="ContainerStatus for \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\": not found" Aug 13 07:07:25.239970 kubelet[2513]: E0813 07:07:25.239943 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\": not found" containerID="561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282" Aug 13 07:07:25.240048 kubelet[2513]: I0813 07:07:25.239982 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282"} err="failed to get container status \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\": rpc error: code = NotFound desc = an error occurred when try to find container \"561d6d937136e38d914d03836c3d61ceb3afe18f89b47c5bbe058c3f4a28b282\": not found" Aug 13 07:07:25.240048 kubelet[2513]: I0813 07:07:25.240046 2513 scope.go:117] "RemoveContainer" containerID="2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9" Aug 13 07:07:25.240254 containerd[1462]: time="2025-08-13T07:07:25.240211031Z" level=error msg="ContainerStatus for \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\": not found" Aug 13 07:07:25.240440 kubelet[2513]: E0813 07:07:25.240414 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\": not found" containerID="2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9" Aug 13 07:07:25.240484 kubelet[2513]: I0813 07:07:25.240454 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9"} err="failed to get container status \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e714271e6ff087c83efa58ae3b54e2ffb235b1f1a5c270d4388b8fc3e3d95d9\": not found" Aug 13 07:07:25.240510 kubelet[2513]: I0813 07:07:25.240482 2513 scope.go:117] "RemoveContainer" containerID="758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691" Aug 13 07:07:25.240761 containerd[1462]: time="2025-08-13T07:07:25.240725795Z" level=error msg="ContainerStatus for \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\": not found" Aug 13 07:07:25.240909 kubelet[2513]: E0813 07:07:25.240893 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\": not found" containerID="758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691" Aug 13 07:07:25.240962 kubelet[2513]: I0813 07:07:25.240910 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691"} err="failed to get container status \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\": rpc error: code = NotFound desc = an error occurred when try to find container \"758f64bca354949198b5d288abe9e54e84606d63620e5a919e529cfcd901e691\": not found" Aug 13 07:07:25.240962 kubelet[2513]: I0813 07:07:25.240923 2513 
scope.go:117] "RemoveContainer" containerID="1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba" Aug 13 07:07:25.241781 containerd[1462]: time="2025-08-13T07:07:25.241144986Z" level=error msg="ContainerStatus for \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\": not found" Aug 13 07:07:25.241781 containerd[1462]: time="2025-08-13T07:07:25.241671152Z" level=error msg="ContainerStatus for \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\": not found" Aug 13 07:07:25.241872 kubelet[2513]: E0813 07:07:25.241315 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\": not found" containerID="1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba" Aug 13 07:07:25.241872 kubelet[2513]: I0813 07:07:25.241368 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba"} err="failed to get container status \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e31373253c5e23e7574038d9c0ba21af054151160c51412524bb3971c783bba\": not found" Aug 13 07:07:25.241872 kubelet[2513]: I0813 07:07:25.241406 2513 scope.go:117] "RemoveContainer" containerID="63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5" Aug 13 07:07:25.241872 kubelet[2513]: E0813 07:07:25.241837 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\": not found" containerID="63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5" Aug 13 07:07:25.241872 kubelet[2513]: I0813 07:07:25.241859 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5"} err="failed to get container status \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"63b6cf90598da70634c82e954db8b7c3d45cd573f799d3356f76f0c12d29f2e5\": not found" Aug 13 07:07:25.241872 kubelet[2513]: I0813 07:07:25.241876 2513 scope.go:117] "RemoveContainer" containerID="1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b" Aug 13 07:07:25.243293 containerd[1462]: time="2025-08-13T07:07:25.243002287Z" level=info msg="RemoveContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\"" Aug 13 07:07:25.246339 containerd[1462]: time="2025-08-13T07:07:25.246313887Z" level=info msg="RemoveContainer for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" returns successfully" Aug 13 07:07:25.246485 kubelet[2513]: I0813 07:07:25.246452 2513 scope.go:117] "RemoveContainer" containerID="1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b" Aug 13 07:07:25.246721 containerd[1462]: 
time="2025-08-13T07:07:25.246658715Z" level=error msg="ContainerStatus for \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\": not found" Aug 13 07:07:25.246844 kubelet[2513]: E0813 07:07:25.246820 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\": not found" containerID="1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b" Aug 13 07:07:25.246882 kubelet[2513]: I0813 07:07:25.246844 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b"} err="failed to get container status \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1501ca34d245dcceecb5eb8c91ab694fbc134cead086a84e14fd82ada01f2a5b\": not found" Aug 13 07:07:25.387716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b-rootfs.mount: Deactivated successfully. Aug 13 07:07:25.387848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a06ea6e94d5150718ffab1641f125a89466d294efb1a9aee3b97da7986da319b-shm.mount: Deactivated successfully. Aug 13 07:07:25.387938 systemd[1]: var-lib-kubelet-pods-8cf3670b\x2db616\x2d4414\x2d8278\x2ddbf26e8ecb68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhct7v.mount: Deactivated successfully. Aug 13 07:07:25.388022 systemd[1]: var-lib-kubelet-pods-8cf3670b\x2db616\x2d4414\x2d8278\x2ddbf26e8ecb68-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 07:07:25.388116 systemd[1]: var-lib-kubelet-pods-8cf3670b\x2db616\x2d4414\x2d8278\x2ddbf26e8ecb68-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:07:25.388190 systemd[1]: var-lib-kubelet-pods-1762ef61\x2de59e\x2d426a\x2db12a\x2d82523ec4adb4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbd72j.mount: Deactivated successfully. Aug 13 07:07:26.325244 sshd[4187]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:26.338945 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:40122.service: Deactivated successfully. Aug 13 07:07:26.341813 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:07:26.343838 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:07:26.351004 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Aug 13 07:07:26.352215 systemd-logind[1445]: Removed session 25. Aug 13 07:07:26.384953 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:26.386609 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:26.390908 systemd-logind[1445]: New session 26 of user core. Aug 13 07:07:26.400931 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 07:07:26.936896 kubelet[2513]: I0813 07:07:26.936841 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1762ef61-e59e-426a-b12a-82523ec4adb4" path="/var/lib/kubelet/pods/1762ef61-e59e-426a-b12a-82523ec4adb4/volumes" Aug 13 07:07:26.937476 kubelet[2513]: I0813 07:07:26.937455 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf3670b-b616-4414-8278-dbf26e8ecb68" path="/var/lib/kubelet/pods/8cf3670b-b616-4414-8278-dbf26e8ecb68/volumes" Aug 13 07:07:27.052695 sshd[4349]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:27.062537 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:40130.service: Deactivated successfully. Aug 13 07:07:27.065844 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:07:27.069424 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:07:27.076194 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:40132.service - OpenSSH per-connection server daemon (10.0.0.1:40132). Aug 13 07:07:27.078199 systemd-logind[1445]: Removed session 26. Aug 13 07:07:27.105363 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 40132 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:27.107304 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:27.117619 systemd[1]: Created slice kubepods-burstable-pod6fd2fc35_540b_4ce9_94f0_46ffde6ebe32.slice - libcontainer container kubepods-burstable-pod6fd2fc35_540b_4ce9_94f0_46ffde6ebe32.slice. Aug 13 07:07:27.123599 systemd-logind[1445]: New session 27 of user core. Aug 13 07:07:27.128967 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 07:07:27.156985 kubelet[2513]: I0813 07:07:27.156929 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-cilium-config-path\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.156985 kubelet[2513]: I0813 07:07:27.156973 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-host-proc-sys-kernel\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157000 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-hubble-tls\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157019 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-cilium-run\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157052 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-bpf-maps\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " 
pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157068 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-etc-cni-netd\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157084 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-host-proc-sys-net\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157187 kubelet[2513]: I0813 07:07:27.157147 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-hostproc\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157185 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-cni-path\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157215 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-cilium-ipsec-secrets\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157233 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zftwz\" (UniqueName: \"kubernetes.io/projected/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-kube-api-access-zftwz\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157256 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-cilium-cgroup\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157274 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-lib-modules\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157346 kubelet[2513]: I0813 07:07:27.157311 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-xtables-lock\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.157471 kubelet[2513]: I0813 07:07:27.157327 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fd2fc35-540b-4ce9-94f0-46ffde6ebe32-clustermesh-secrets\") pod \"cilium-mw66m\" (UID: \"6fd2fc35-540b-4ce9-94f0-46ffde6ebe32\") " pod="kube-system/cilium-mw66m" Aug 13 07:07:27.184018 sshd[4361]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:27.192824 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:40132.service: Deactivated successfully. Aug 13 07:07:27.194971 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 07:07:27.196816 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Aug 13 07:07:27.207958 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:40146.service - OpenSSH per-connection server daemon (10.0.0.1:40146). Aug 13 07:07:27.209013 systemd-logind[1445]: Removed session 27. Aug 13 07:07:27.235290 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 40146 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:07:27.236923 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:27.242358 systemd-logind[1445]: New session 28 of user core. Aug 13 07:07:27.249822 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 07:07:27.434005 kubelet[2513]: E0813 07:07:27.433933 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:27.434693 containerd[1462]: time="2025-08-13T07:07:27.434623758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw66m,Uid:6fd2fc35-540b-4ce9-94f0-46ffde6ebe32,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:27.459758 containerd[1462]: time="2025-08-13T07:07:27.459492085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:27.459758 containerd[1462]: time="2025-08-13T07:07:27.459569192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:27.459758 containerd[1462]: time="2025-08-13T07:07:27.459581055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:27.460218 containerd[1462]: time="2025-08-13T07:07:27.460070039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:27.488959 systemd[1]: Started cri-containerd-536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff.scope - libcontainer container 536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff. 
Aug 13 07:07:27.513531 containerd[1462]: time="2025-08-13T07:07:27.513465741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw66m,Uid:6fd2fc35-540b-4ce9-94f0-46ffde6ebe32,Namespace:kube-system,Attempt:0,} returns sandbox id \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\"" Aug 13 07:07:27.514605 kubelet[2513]: E0813 07:07:27.514542 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:27.523173 containerd[1462]: time="2025-08-13T07:07:27.523106470Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:07:27.536945 containerd[1462]: time="2025-08-13T07:07:27.536865855Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52\"" Aug 13 07:07:27.537624 containerd[1462]: time="2025-08-13T07:07:27.537576251Z" level=info msg="StartContainer for \"562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52\"" Aug 13 07:07:27.575917 systemd[1]: Started cri-containerd-562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52.scope - libcontainer container 562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52. Aug 13 07:07:27.606282 containerd[1462]: time="2025-08-13T07:07:27.606109880Z" level=info msg="StartContainer for \"562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52\" returns successfully" Aug 13 07:07:27.620229 systemd[1]: cri-containerd-562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52.scope: Deactivated successfully. 
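The RunPodSandbox, CreateContainer and StartContainer messages in these containerd entries are the standard CRI flow the kubelet drives for every pod: one sandbox, then each container created and started inside it. A minimal sketch of that flow as a CRI client, assuming the k8s.io/cri-api Go bindings and the common containerd socket path; the image reference and config values are placeholders, not taken from the log.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI socket (default path, an assumption here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := pb.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: creates the sandbox the log returns an id for.
	sb, err := rt.RunPodSandbox(ctx, &pb.RunPodSandboxRequest{
		Config: &pb.PodSandboxConfig{
			Metadata: &pb.PodSandboxMetadata{Name: "cilium-mw66m", Namespace: "kube-system"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer + 3. StartContainer: the mount-cgroup init container.
	c, err := rt.CreateContainer(ctx, &pb.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &pb.ContainerConfig{
			Metadata: &pb.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &pb.ImageSpec{Image: "quay.io/cilium/cilium:latest"}, // placeholder image
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &pb.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, c.ContainerId)
}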
Aug 13 07:07:27.654306 containerd[1462]: time="2025-08-13T07:07:27.654233666Z" level=info msg="shim disconnected" id=562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52 namespace=k8s.io Aug 13 07:07:27.654306 containerd[1462]: time="2025-08-13T07:07:27.654303900Z" level=warning msg="cleaning up after shim disconnected" id=562c59333c653266fa4c9e8b686a3e7b3211c716898420482e41619fe6108b52 namespace=k8s.io Aug 13 07:07:27.654306 containerd[1462]: time="2025-08-13T07:07:27.654313318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:27.934721 kubelet[2513]: E0813 07:07:27.934652 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:27.934968 kubelet[2513]: E0813 07:07:27.934912 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:27.983130 kubelet[2513]: E0813 07:07:27.983087 2513 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:07:28.201393 kubelet[2513]: E0813 07:07:28.200346 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:28.206513 containerd[1462]: time="2025-08-13T07:07:28.206443869Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:07:28.219864 containerd[1462]: time="2025-08-13T07:07:28.219801755Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7\"" Aug 13 07:07:28.220418 containerd[1462]: time="2025-08-13T07:07:28.220394566Z" level=info msg="StartContainer for \"0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7\"" Aug 13 07:07:28.251821 systemd[1]: Started cri-containerd-0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7.scope - libcontainer container 0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7. Aug 13 07:07:28.264259 systemd[1]: run-containerd-runc-k8s.io-536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff-runc.4SWykG.mount: Deactivated successfully. Aug 13 07:07:28.283532 containerd[1462]: time="2025-08-13T07:07:28.283491850Z" level=info msg="StartContainer for \"0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7\" returns successfully" Aug 13 07:07:28.292549 systemd[1]: cri-containerd-0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7.scope: Deactivated successfully. Aug 13 07:07:28.311975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7-rootfs.mount: Deactivated successfully. 
Aug 13 07:07:28.315926 containerd[1462]: time="2025-08-13T07:07:28.315875481Z" level=info msg="shim disconnected" id=0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7 namespace=k8s.io Aug 13 07:07:28.315926 containerd[1462]: time="2025-08-13T07:07:28.315922751Z" level=warning msg="cleaning up after shim disconnected" id=0c3f963a2e6a5b458ac45aa970e24398246107e4c6a196cf8a9ee4f1440f95d7 namespace=k8s.io Aug 13 07:07:28.316097 containerd[1462]: time="2025-08-13T07:07:28.315930907Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:29.204244 kubelet[2513]: E0813 07:07:29.204189 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:29.551490 containerd[1462]: time="2025-08-13T07:07:29.551343504Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:07:29.683080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441388017.mount: Deactivated successfully. Aug 13 07:07:29.686597 containerd[1462]: time="2025-08-13T07:07:29.686548047Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800\"" Aug 13 07:07:29.687252 containerd[1462]: time="2025-08-13T07:07:29.687219758Z" level=info msg="StartContainer for \"c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800\"" Aug 13 07:07:29.722697 systemd[1]: run-containerd-runc-k8s.io-c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800-runc.KqmNp6.mount: Deactivated successfully. Aug 13 07:07:29.740880 systemd[1]: Started cri-containerd-c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800.scope - libcontainer container c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800. Aug 13 07:07:29.775736 systemd[1]: cri-containerd-c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800.scope: Deactivated successfully. 
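The mount-bpf-fs step created and started in these entries ensures the BPF filesystem is mounted on the host so the agent can pin its eBPF maps across restarts. A minimal sketch, assuming golang.org/x/sys/unix and the conventional /sys/fs/bpf mount point; real init logic would first check whether the filesystem is already mounted.

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount the BPF filesystem at its conventional location; idempotence handling
	// is reduced to tolerating EBUSY for brevity.
	const target = "/sys/fs/bpf"
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil && err != unix.EBUSY {
		log.Fatalf("mount bpffs on %s: %v", target, err)
	}
	log.Printf("bpffs mounted at %s", target)
}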
Aug 13 07:07:29.778568 containerd[1462]: time="2025-08-13T07:07:29.778515980Z" level=info msg="StartContainer for \"c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800\" returns successfully" Aug 13 07:07:29.805943 containerd[1462]: time="2025-08-13T07:07:29.805809893Z" level=info msg="shim disconnected" id=c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800 namespace=k8s.io Aug 13 07:07:29.805943 containerd[1462]: time="2025-08-13T07:07:29.805864417Z" level=warning msg="cleaning up after shim disconnected" id=c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800 namespace=k8s.io Aug 13 07:07:29.805943 containerd[1462]: time="2025-08-13T07:07:29.805873264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:30.208152 kubelet[2513]: E0813 07:07:30.208111 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:30.214631 containerd[1462]: time="2025-08-13T07:07:30.214583288Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:07:30.227924 containerd[1462]: time="2025-08-13T07:07:30.227867301Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855\"" Aug 13 07:07:30.228451 containerd[1462]: time="2025-08-13T07:07:30.228405769Z" level=info msg="StartContainer for \"72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855\"" Aug 13 07:07:30.261814 systemd[1]: Started cri-containerd-72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855.scope - libcontainer container 72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855. Aug 13 07:07:30.287971 systemd[1]: cri-containerd-72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855.scope: Deactivated successfully. Aug 13 07:07:30.290601 containerd[1462]: time="2025-08-13T07:07:30.290560261Z" level=info msg="StartContainer for \"72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855\" returns successfully" Aug 13 07:07:30.324864 containerd[1462]: time="2025-08-13T07:07:30.324776404Z" level=info msg="shim disconnected" id=72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855 namespace=k8s.io Aug 13 07:07:30.324864 containerd[1462]: time="2025-08-13T07:07:30.324849613Z" level=warning msg="cleaning up after shim disconnected" id=72d51450861361985927b5ee8a390fab54b58231d236e42ef343edaecb968855 namespace=k8s.io Aug 13 07:07:30.324864 containerd[1462]: time="2025-08-13T07:07:30.324862438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:30.678911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7cb1154ef15027d64325c602fc23d5e6ebae2514bf2f47de5808146da2ba800-rootfs.mount: Deactivated successfully. 
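The clean-cilium-state container that starts and exits here clears stale per-endpoint state left by any previous agent before cilium-agent itself is created. A minimal sketch of that idea using only the standard library; the /var/run/cilium/state directory layout is an assumption for illustration, not Cilium's exact cleanup logic.

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Remove leftover endpoint state so the new agent starts from a clean slate.
	stateDir := "/var/run/cilium/state" // assumed location
	entries, err := os.ReadDir(stateDir)
	if err != nil {
		log.Fatalf("read %s: %v", stateDir, err)
	}
	for _, e := range entries {
		p := filepath.Join(stateDir, e.Name())
		if err := os.RemoveAll(p); err != nil {
			log.Printf("remove %s: %v", p, err)
		}
	}
}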
Aug 13 07:07:31.211626 kubelet[2513]: E0813 07:07:31.211574 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:31.217992 containerd[1462]: time="2025-08-13T07:07:31.217942943Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:07:31.242502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089412437.mount: Deactivated successfully. Aug 13 07:07:31.253187 containerd[1462]: time="2025-08-13T07:07:31.253131732Z" level=info msg="CreateContainer within sandbox \"536a8585af82291fd1075f654f1844bff1f28f5646d13118db6dc94bb26042ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f\"" Aug 13 07:07:31.253751 containerd[1462]: time="2025-08-13T07:07:31.253725535Z" level=info msg="StartContainer for \"365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f\"" Aug 13 07:07:31.288873 systemd[1]: Started cri-containerd-365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f.scope - libcontainer container 365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f. Aug 13 07:07:31.347827 containerd[1462]: time="2025-08-13T07:07:31.347765732Z" level=info msg="StartContainer for \"365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f\" returns successfully" Aug 13 07:07:31.772718 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 07:07:31.934473 kubelet[2513]: E0813 07:07:31.934423 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:32.215382 kubelet[2513]: E0813 07:07:32.215345 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:32.229375 kubelet[2513]: I0813 07:07:32.228701 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mw66m" podStartSLOduration=5.228664968 podStartE2EDuration="5.228664968s" podCreationTimestamp="2025-08-13 07:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:32.228167529 +0000 UTC m=+89.390029008" watchObservedRunningTime="2025-08-13 07:07:32.228664968 +0000 UTC m=+89.390526447" Aug 13 07:07:33.435110 kubelet[2513]: E0813 07:07:33.434966 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:34.019324 systemd[1]: run-containerd-runc-k8s.io-365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f-runc.J9t8Qy.mount: Deactivated successfully. 
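The recurring "Nameserver limits exceeded" warnings in this section come from the kubelet trimming the host's resolv.conf to the first three nameservers when it builds pod DNS configuration, which is why the applied line is reduced to "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that trimming behaviour; the limit of three matches the resolver convention the kubelet enforces, but the code is a simplified illustration rather than the kubelet's actual dns.go logic.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit the kubelet warns about

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}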
Aug 13 07:07:34.913578 systemd-networkd[1391]: lxc_health: Link UP Aug 13 07:07:34.923345 systemd-networkd[1391]: lxc_health: Gained carrier Aug 13 07:07:35.436960 kubelet[2513]: E0813 07:07:35.436886 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:36.223408 kubelet[2513]: E0813 07:07:36.223356 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:36.229349 systemd-networkd[1391]: lxc_health: Gained IPv6LL Aug 13 07:07:37.224889 kubelet[2513]: E0813 07:07:37.224848 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:38.278299 systemd[1]: run-containerd-runc-k8s.io-365c57fee353e843f30978fcbecf6d14dbf01a08f6f57bd3bde0eb34060bc70f-runc.e3LFUO.mount: Deactivated successfully. Aug 13 07:07:38.936350 kubelet[2513]: E0813 07:07:38.936306 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:07:40.473248 sshd[4369]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:40.478265 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:40146.service: Deactivated successfully. Aug 13 07:07:40.480996 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 07:07:40.482186 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Aug 13 07:07:40.483369 systemd-logind[1445]: Removed session 28.
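The lxc_health interface that systemd-networkd reports coming up, gaining carrier and then an IPv6 link-local address is the veth endpoint Cilium creates for its node health checks; the final entries simply close SSH session 28. A minimal sketch, using only the Go standard library, of checking that this interface exists and is up; the interface name is taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Look up the health veth that systemd-networkd reported as up.
	iface, err := net.InterfaceByName("lxc_health")
	if err != nil {
		log.Fatalf("lxc_health not found: %v", err)
	}
	up := iface.Flags&net.FlagUp != 0
	fmt.Printf("lxc_health: index=%d mtu=%d up=%v\n", iface.Index, iface.MTU, up)

	if addrs, err := iface.Addrs(); err == nil {
		for _, a := range addrs {
			// Includes the IPv6 link-local address once the link has carrier.
			fmt.Println("  addr:", a)
		}
	}
}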