Jul 11 00:22:59.059792 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:18:23 -00 2025
Jul 11 00:22:59.059829 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:22:59.059841 kernel: BIOS-provided physical RAM map:
Jul 11 00:22:59.059850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:22:59.059859 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:22:59.059868 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:22:59.059878 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:22:59.059890 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:22:59.059904 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:22:59.059913 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:22:59.059922 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:22:59.059931 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:22:59.059939 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:22:59.059960 kernel: NX (Execute Disable) protection: active
Jul 11 00:22:59.059975 kernel: APIC: Static calls initialized
Jul 11 00:22:59.059985 kernel: SMBIOS 2.8 present.
Jul 11 00:22:59.059998 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:22:59.060008 kernel: DMI: Memory slots populated: 1/1
Jul 11 00:22:59.060018 kernel: Hypervisor detected: KVM
Jul 11 00:22:59.060027 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:22:59.060037 kernel: kvm-clock: using sched offset of 5300048132 cycles
Jul 11 00:22:59.060048 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:22:59.060058 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:22:59.060072 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:22:59.060082 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:22:59.060092 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:22:59.060102 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:22:59.060112 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:22:59.060122 kernel: Using GB pages for direct mapping
Jul 11 00:22:59.060131 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:22:59.060141 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:22:59.060189 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060206 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060216 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060225 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:22:59.060235 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060245 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060255 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060265 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:59.060275 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:22:59.060293 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:22:59.060303 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:22:59.060313 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:22:59.060323 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:22:59.060333 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:22:59.060344 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:22:59.060357 kernel: No NUMA configuration found
Jul 11 00:22:59.060367 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:22:59.060377 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 11 00:22:59.060387 kernel: Zone ranges:
Jul 11 00:22:59.060398 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:22:59.060408 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:22:59.060418 kernel: Normal empty
Jul 11 00:22:59.060427 kernel: Device empty
Jul 11 00:22:59.060437 kernel: Movable zone start for each node
Jul 11 00:22:59.060448 kernel: Early memory node ranges
Jul 11 00:22:59.060462 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:22:59.060472 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:22:59.060483 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:22:59.060493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:22:59.060503 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:22:59.060513 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:22:59.060523 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:22:59.060539 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:22:59.060549 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:22:59.060563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:22:59.060573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:22:59.060586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:22:59.060597 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:22:59.060607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:22:59.060617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:22:59.060627 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:22:59.060638 kernel: TSC deadline timer available
Jul 11 00:22:59.060648 kernel: CPU topo: Max. logical packages: 1
Jul 11 00:22:59.060661 kernel: CPU topo: Max. logical dies: 1
Jul 11 00:22:59.060671 kernel: CPU topo: Max. dies per package: 1
Jul 11 00:22:59.060681 kernel: CPU topo: Max. threads per core: 1
Jul 11 00:22:59.060691 kernel: CPU topo: Num. cores per package: 4
Jul 11 00:22:59.060701 kernel: CPU topo: Num. threads per package: 4
Jul 11 00:22:59.060711 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 11 00:22:59.060722 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:22:59.060732 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:22:59.060742 kernel: kvm-guest: setup PV sched yield
Jul 11 00:22:59.060752 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:22:59.060766 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:22:59.060777 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:22:59.060788 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:22:59.060798 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 11 00:22:59.060808 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 11 00:22:59.060818 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:22:59.060828 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:22:59.060839 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:22:59.060851 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:22:59.060865 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:22:59.060875 kernel: random: crng init done
Jul 11 00:22:59.060884 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:22:59.060895 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:22:59.060905 kernel: Fallback order for Node 0: 0
Jul 11 00:22:59.060915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 11 00:22:59.060924 kernel: Policy zone: DMA32
Jul 11 00:22:59.060934 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:22:59.060960 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:22:59.060970 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 11 00:22:59.060980 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 00:22:59.060987 kernel: Dynamic Preempt: voluntary
Jul 11 00:22:59.061007 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:22:59.061017 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:22:59.061025 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:22:59.061041 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:22:59.061054 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:22:59.061065 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:22:59.061073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:22:59.061080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:22:59.061088 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:59.061096 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:59.061104 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:59.061111 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:22:59.061123 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:22:59.061142 kernel: Console: colour VGA+ 80x25
Jul 11 00:22:59.061171 kernel: printk: legacy console [ttyS0] enabled
Jul 11 00:22:59.061179 kernel: ACPI: Core revision 20240827
Jul 11 00:22:59.061187 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:22:59.061198 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:22:59.061206 kernel: x2apic enabled
Jul 11 00:22:59.061216 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:22:59.061224 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:22:59.061232 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:22:59.061243 kernel: kvm-guest: setup PV IPIs
Jul 11 00:22:59.061251 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:22:59.061259 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:22:59.061267 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:22:59.061274 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:22:59.061282 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:22:59.061290 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:22:59.061298 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:22:59.061308 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:22:59.061316 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:22:59.061324 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:22:59.061332 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:22:59.061340 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:22:59.061348 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:22:59.061356 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:22:59.061364 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:22:59.061372 kernel: x86/bugs: return thunk changed
Jul 11 00:22:59.061383 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:22:59.061394 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:22:59.061404 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:22:59.061414 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:22:59.061424 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:22:59.061435 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:22:59.061445 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:22:59.061455 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:22:59.061468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 00:22:59.061479 kernel: landlock: Up and running.
Jul 11 00:22:59.061489 kernel: SELinux: Initializing.
Jul 11 00:22:59.061499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:59.061514 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:59.061525 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:22:59.061535 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:22:59.061545 kernel: ... version:                0
Jul 11 00:22:59.061555 kernel: ... bit width:              48
Jul 11 00:22:59.061569 kernel: ... generic registers:      6
Jul 11 00:22:59.061579 kernel: ... value mask:             0000ffffffffffff
Jul 11 00:22:59.061589 kernel: ... max period:             00007fffffffffff
Jul 11 00:22:59.061599 kernel: ... fixed-purpose events:   0
Jul 11 00:22:59.061610 kernel: ... event mask:             000000000000003f
Jul 11 00:22:59.061620 kernel: signal: max sigframe size: 1776
Jul 11 00:22:59.061630 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:22:59.061641 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:22:59.061652 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 00:22:59.061660 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:22:59.061671 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:22:59.061679 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:22:59.061687 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:22:59.061695 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:22:59.061703 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 11 00:22:59.061711 kernel: devtmpfs: initialized
Jul 11 00:22:59.061719 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:22:59.061728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:22:59.061739 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:22:59.061752 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:22:59.061763 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:22:59.061773 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:22:59.061784 kernel: audit: type=2000 audit(1752193375.970:1): state=initialized audit_enabled=0 res=1
Jul 11 00:22:59.061794 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:22:59.061805 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:22:59.061815 kernel: cpuidle: using governor menu
Jul 11 00:22:59.061825 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:22:59.061836 kernel: dca service started, version 1.12.1
Jul 11 00:22:59.061850 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 11 00:22:59.061860 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:22:59.061871 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:22:59.061882 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:22:59.061892 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:22:59.061902 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:22:59.061913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:22:59.061923 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:22:59.061936 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:22:59.061957 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:22:59.061966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:22:59.061973 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:22:59.061981 kernel: ACPI: Interpreter enabled
Jul 11 00:22:59.061992 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:22:59.062000 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:22:59.062008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:22:59.062016 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:22:59.062024 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:22:59.062035 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:22:59.062430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:22:59.062595 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:22:59.062732 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:22:59.062744 kernel: PCI host bridge to bus 0000:00
Jul 11 00:22:59.062886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:22:59.063039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:22:59.063188 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:59.063306 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:59.063445 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:59.063597 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:59.063740 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:22:59.063935 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 11 00:22:59.064144 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 11 00:22:59.064325 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:22:59.064477 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:22:59.064626 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:22:59.064774 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:22:59.064960 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 00:22:59.065135 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 11 00:22:59.065315 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:22:59.065471 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:22:59.065660 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 00:22:59.065814 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 00:22:59.065986 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:22:59.066131 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:22:59.066296 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 00:22:59.066446 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 11 00:22:59.066605 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:22:59.066757 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:22:59.066905 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:22:59.067088 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 11 00:22:59.067262 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:22:59.067440 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 11 00:22:59.067605 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 11 00:22:59.067753 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:22:59.067924 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 11 00:22:59.068086 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 11 00:22:59.068098 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:22:59.068106 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:22:59.068119 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:22:59.068127 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:22:59.068135 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:22:59.068143 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:22:59.068169 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:22:59.068178 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:22:59.068185 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:22:59.068208 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:22:59.068227 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:22:59.068249 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:22:59.068257 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:22:59.068265 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:22:59.068274 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:22:59.068282 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:22:59.068290 kernel: iommu: Default domain type: Translated
Jul 11 00:22:59.068298 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:22:59.068306 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:22:59.068314 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:22:59.068325 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:22:59.068333 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:22:59.068480 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:22:59.068617 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:22:59.068738 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:22:59.068749 kernel: vgaarb: loaded
Jul 11 00:22:59.068758 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:22:59.068766 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:22:59.068778 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:22:59.068786 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:22:59.068794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:22:59.068802 kernel: pnp: PnP ACPI init
Jul 11 00:22:59.068961 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:22:59.068974 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:22:59.068982 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:22:59.068990 kernel: NET: Registered PF_INET protocol family
Jul 11 00:22:59.069002 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:22:59.069010 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:22:59.069018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:22:59.069026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:22:59.069035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:22:59.069042 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:22:59.069050 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:59.069058 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:59.069066 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:22:59.069077 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:22:59.069348 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:22:59.069474 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:22:59.069592 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:59.069709 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:59.069826 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:59.069953 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:59.069966 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:22:59.069981 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:22:59.069990 kernel: Initialise system trusted keyrings
Jul 11 00:22:59.070001 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:22:59.070012 kernel: Key type asymmetric registered
Jul 11 00:22:59.070022 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:22:59.070033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:22:59.070042 kernel: io scheduler mq-deadline registered
Jul 11 00:22:59.070052 kernel: io scheduler kyber registered
Jul 11 00:22:59.070061 kernel: io scheduler bfq registered
Jul 11 00:22:59.070071 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:22:59.070083 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:22:59.070093 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:22:59.070102 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:22:59.070113 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:22:59.070123 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:22:59.070134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:22:59.070145 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:22:59.070172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:22:59.070349 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:22:59.070371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:22:59.070501 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:22:59.070630 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:22:58 UTC (1752193378)
Jul 11 00:22:59.070773 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:22:59.070788 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:22:59.070799 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:22:59.070810 kernel: Segment Routing with IPv6
Jul 11 00:22:59.070825 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:22:59.070836 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:22:59.070847 kernel: Key type dns_resolver registered
Jul 11 00:22:59.070857 kernel: IPI shorthand broadcast: enabled
Jul 11 00:22:59.070868 kernel: sched_clock: Marking stable (3508002371, 126821463)->(3660548254, -25724420)
Jul 11 00:22:59.070879 kernel: registered taskstats version 1
Jul 11 00:22:59.070890 kernel: Loading compiled-in X.509 certificates
Jul 11 00:22:59.070901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e2778f992738e32ced6c6a485d2ed31f29141742'
Jul 11 00:22:59.070912 kernel: Demotion targets for Node 0: null
Jul 11 00:22:59.070926 kernel: Key type .fscrypt registered
Jul 11 00:22:59.070936 kernel: Key type fscrypt-provisioning registered
Jul 11 00:22:59.070957 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:22:59.070968 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:22:59.070978 kernel: ima: No architecture policies found
Jul 11 00:22:59.070988 kernel: clk: Disabling unused clocks
Jul 11 00:22:59.070996 kernel: Warning: unable to open an initial console.
Jul 11 00:22:59.071004 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 11 00:22:59.071013 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 00:22:59.071024 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 00:22:59.071032 kernel: Run /init as init process
Jul 11 00:22:59.071040 kernel:   with arguments:
Jul 11 00:22:59.071048 kernel:     /init
Jul 11 00:22:59.071056 kernel:   with environment:
Jul 11 00:22:59.071064 kernel:     HOME=/
Jul 11 00:22:59.071072 kernel:     TERM=linux
Jul 11 00:22:59.071080 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:22:59.071094 systemd[1]: Successfully made /usr/ read-only.
Jul 11 00:22:59.071115 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 00:22:59.071265 systemd[1]: Detected virtualization kvm.
Jul 11 00:22:59.071277 systemd[1]: Detected architecture x86-64.
Jul 11 00:22:59.071286 systemd[1]: Running in initrd.
Jul 11 00:22:59.071294 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:22:59.071308 systemd[1]: Hostname set to .
Jul 11 00:22:59.071317 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:22:59.071332 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:22:59.072089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:22:59.072105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:22:59.072118 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:22:59.072130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:22:59.072139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:22:59.072171 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:22:59.072182 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:22:59.072191 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:22:59.072199 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:22:59.072208 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:22:59.072217 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:22:59.072226 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:22:59.072238 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:22:59.072247 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:22:59.072256 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:22:59.072265 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:22:59.072274 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:22:59.072283 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 00:22:59.072292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:22:59.072301 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:22:59.072311 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:22:59.072322 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:22:59.072331 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:22:59.072340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:22:59.072351 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:22:59.072365 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 11 00:22:59.072382 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:22:59.072394 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:22:59.072406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:22:59.072418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:22:59.072430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:22:59.072443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:22:59.072459 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:22:59.072471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:22:59.072527 systemd-journald[219]: Collecting audit messages is disabled. Jul 11 00:22:59.072563 systemd-journald[219]: Journal started Jul 11 00:22:59.072592 systemd-journald[219]: Runtime Journal (/run/log/journal/d0e3b9b31f6a48b5a99ffedf341e7bfd) is 6M, max 48.6M, 42.5M free. 
Jul 11 00:22:59.057784 systemd-modules-load[220]: Inserted module 'overlay' Jul 11 00:22:59.096753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:22:59.096790 kernel: Bridge firewalling registered Jul 11 00:22:59.095709 systemd-modules-load[220]: Inserted module 'br_netfilter' Jul 11 00:22:59.099824 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:22:59.100433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:22:59.102754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:22:59.105417 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:22:59.112448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:22:59.115577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:22:59.119452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:22:59.125364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:22:59.136351 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 11 00:22:59.137115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:22:59.137478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:22:59.143444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:22:59.145856 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:22:59.148763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 11 00:22:59.166066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:22:59.182488 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372 Jul 11 00:22:59.222009 systemd-resolved[261]: Positive Trust Anchors: Jul 11 00:22:59.222047 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:22:59.222089 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:22:59.225059 systemd-resolved[261]: Defaulting to hostname 'linux'. Jul 11 00:22:59.226419 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:22:59.231743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:22:59.314216 kernel: SCSI subsystem initialized Jul 11 00:22:59.324197 kernel: Loading iSCSI transport class v2.0-870. 
Jul 11 00:22:59.371197 kernel: iscsi: registered transport (tcp) Jul 11 00:22:59.401386 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:22:59.401499 kernel: QLogic iSCSI HBA Driver Jul 11 00:22:59.427953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:22:59.464245 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:22:59.465333 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:22:59.543703 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:22:59.546458 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:22:59.640223 kernel: raid6: avx2x4 gen() 25306 MB/s Jul 11 00:22:59.678227 kernel: raid6: avx2x2 gen() 25974 MB/s Jul 11 00:22:59.695414 kernel: raid6: avx2x1 gen() 22973 MB/s Jul 11 00:22:59.695505 kernel: raid6: using algorithm avx2x2 gen() 25974 MB/s Jul 11 00:22:59.713403 kernel: raid6: .... xor() 15607 MB/s, rmw enabled Jul 11 00:22:59.713535 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:22:59.736203 kernel: xor: automatically using best checksumming function avx Jul 11 00:22:59.947216 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:22:59.957452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:22:59.959493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:22:59.998114 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 11 00:23:00.005995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:23:00.010299 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:23:00.048950 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jul 11 00:23:00.088982 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 11 00:23:00.093286 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:23:00.209670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:23:00.214932 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:23:00.262192 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 00:23:00.276874 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:23:00.277214 kernel: libata version 3.00 loaded. Jul 11 00:23:00.280191 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:23:00.280225 kernel: GPT:9289727 != 19775487 Jul 11 00:23:00.280240 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:23:00.281231 kernel: GPT:9289727 != 19775487 Jul 11 00:23:00.281258 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:23:00.283201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:23:00.283238 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:23:00.302272 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 00:23:00.305398 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 00:23:00.305908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:23:00.310175 kernel: AES CTR mode by8 optimization enabled Jul 11 00:23:00.310190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:23:00.315259 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 00:23:00.321481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 11 00:23:00.321517 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 11 00:23:00.321816 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 11 00:23:00.322018 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 00:23:00.326933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:23:00.330407 kernel: scsi host0: ahci Jul 11 00:23:00.328822 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:23:00.335413 kernel: scsi host1: ahci Jul 11 00:23:00.335689 kernel: scsi host2: ahci Jul 11 00:23:00.339183 kernel: scsi host3: ahci Jul 11 00:23:00.359197 kernel: scsi host4: ahci Jul 11 00:23:00.361183 kernel: scsi host5: ahci Jul 11 00:23:00.361495 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 11 00:23:00.361512 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 11 00:23:00.363383 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 11 00:23:00.363420 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 11 00:23:00.363436 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 11 00:23:00.363449 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 11 00:23:00.392019 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 00:23:00.425014 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 00:23:00.426691 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jul 11 00:23:00.427090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:23:00.442820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 00:23:00.453540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:23:00.460844 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:23:00.683218 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 00:23:00.683304 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 00:23:00.684213 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 00:23:00.685219 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 00:23:00.686191 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 00:23:00.686216 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 00:23:00.686653 kernel: ata3.00: applying bridge limits Jul 11 00:23:00.688200 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 00:23:00.688224 kernel: ata3.00: configured for UDMA/100 Jul 11 00:23:00.689188 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 00:23:00.792373 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 00:23:00.792792 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:23:00.813235 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:23:00.815828 disk-uuid[632]: Primary Header is updated. Jul 11 00:23:00.815828 disk-uuid[632]: Secondary Entries is updated. Jul 11 00:23:00.815828 disk-uuid[632]: Secondary Header is updated. Jul 11 00:23:00.821200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:23:00.829202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:23:01.181722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:23:01.188482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 11 00:23:01.190120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:23:01.191348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:23:01.193489 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:23:01.215442 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:23:01.853232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:23:01.853427 disk-uuid[633]: The operation has completed successfully. Jul 11 00:23:01.889862 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:23:01.890021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:23:01.926740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:23:01.956486 sh[661]: Success Jul 11 00:23:01.976710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:23:01.976785 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:23:01.977912 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 11 00:23:01.988235 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 11 00:23:02.026680 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:23:02.029146 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 00:23:02.043597 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 11 00:23:02.073392 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 11 00:23:02.073492 kernel: BTRFS: device fsid 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (673) Jul 11 00:23:02.075098 kernel: BTRFS info (device dm-0): first mount of filesystem 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 Jul 11 00:23:02.075124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:23:02.076201 kernel: BTRFS info (device dm-0): using free-space-tree Jul 11 00:23:02.082551 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:23:02.083371 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 11 00:23:02.086001 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 00:23:02.087003 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:23:02.088949 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:23:02.124804 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704) Jul 11 00:23:02.124882 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:23:02.124903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:23:02.125698 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:23:02.135200 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:23:02.137143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:23:02.140756 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 11 00:23:02.242461 ignition[749]: Ignition 2.21.0 Jul 11 00:23:02.242476 ignition[749]: Stage: fetch-offline Jul 11 00:23:02.242527 ignition[749]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:23:02.242544 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:23:02.242662 ignition[749]: parsed url from cmdline: "" Jul 11 00:23:02.242667 ignition[749]: no config URL provided Jul 11 00:23:02.242674 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:23:02.242684 ignition[749]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:23:02.242719 ignition[749]: op(1): [started] loading QEMU firmware config module Jul 11 00:23:02.242726 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:23:02.254741 ignition[749]: op(1): [finished] loading QEMU firmware config module Jul 11 00:23:02.256096 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:23:02.259781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:23:02.303707 ignition[749]: parsing config with SHA512: c4af2fb22d6a716824cc7ec0b969800c6723a22a48042f7149ff5e136656cf7538da8bfbc9652b7ca6a5dbecf691808a35eac43481ca2ff08af9c3068ee8bf5c Jul 11 00:23:02.308244 unknown[749]: fetched base config from "system" Jul 11 00:23:02.308262 unknown[749]: fetched user config from "qemu" Jul 11 00:23:02.308784 ignition[749]: fetch-offline: fetch-offline passed Jul 11 00:23:02.308851 ignition[749]: Ignition finished successfully Jul 11 00:23:02.316109 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:23:02.323252 systemd-networkd[851]: lo: Link UP Jul 11 00:23:02.323265 systemd-networkd[851]: lo: Gained carrier Jul 11 00:23:02.327069 systemd-networkd[851]: Enumeration completed Jul 11 00:23:02.327250 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 11 00:23:02.327713 systemd[1]: Reached target network.target - Network. Jul 11 00:23:02.332234 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:23:02.333375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:23:02.337143 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:23:02.337176 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:23:02.341954 systemd-networkd[851]: eth0: Link UP Jul 11 00:23:02.341971 systemd-networkd[851]: eth0: Gained carrier Jul 11 00:23:02.341987 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:23:02.363257 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:23:02.374890 ignition[855]: Ignition 2.21.0 Jul 11 00:23:02.374907 ignition[855]: Stage: kargs Jul 11 00:23:02.375108 ignition[855]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:23:02.375123 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:23:02.378803 ignition[855]: kargs: kargs passed Jul 11 00:23:02.378946 ignition[855]: Ignition finished successfully Jul 11 00:23:02.384748 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:23:02.387346 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 11 00:23:02.425576 ignition[864]: Ignition 2.21.0 Jul 11 00:23:02.425590 ignition[864]: Stage: disks Jul 11 00:23:02.425733 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:23:02.425743 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:23:02.427258 ignition[864]: disks: disks passed Jul 11 00:23:02.427394 ignition[864]: Ignition finished successfully Jul 11 00:23:02.435365 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:23:02.438131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:23:02.439517 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:23:02.441905 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:23:02.445036 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:23:02.446122 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:23:02.447895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:23:02.488461 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.83 Jul 11 00:23:02.488489 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jul 11 00:23:02.490138 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 11 00:23:02.654259 systemd-resolved[261]: Detected conflict on linux8 IN A 10.0.0.83 Jul 11 00:23:02.654284 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux8' to 'linux18'. Jul 11 00:23:02.685038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:23:02.686458 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:23:02.827220 kernel: EXT4-fs (vda9): mounted filesystem b9a26173-6c72-4a5b-b1cb-ad71b806f75e r/w with ordered data mode. Quota mode: none. Jul 11 00:23:02.827966 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jul 11 00:23:02.830169 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:23:02.833756 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:23:02.836385 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 00:23:02.838366 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:23:02.838416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:23:02.838440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:23:02.850325 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:23:02.854062 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:23:02.855791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Jul 11 00:23:02.855827 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:23:02.855843 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:23:02.856840 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:23:02.863418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:23:02.907303 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:23:02.913108 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:23:02.918738 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:23:02.923032 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:23:03.077383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:23:03.087589 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 11 00:23:03.090283 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:23:03.109465 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:23:03.111120 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:23:03.143447 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:23:03.162148 ignition[997]: INFO : Ignition 2.21.0 Jul 11 00:23:03.162148 ignition[997]: INFO : Stage: mount Jul 11 00:23:03.164510 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:23:03.164510 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:23:03.167149 ignition[997]: INFO : mount: mount passed Jul 11 00:23:03.167149 ignition[997]: INFO : Ignition finished successfully Jul 11 00:23:03.167802 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:23:03.170322 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:23:03.202504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:23:03.230207 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Jul 11 00:23:03.232345 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:23:03.232375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:23:03.232388 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:23:03.237087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:23:03.277073 ignition[1026]: INFO : Ignition 2.21.0 Jul 11 00:23:03.277073 ignition[1026]: INFO : Stage: files Jul 11 00:23:03.279469 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:23:03.279469 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:23:03.279469 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:23:03.279469 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:23:03.279469 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:23:03.287587 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:23:03.287587 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:23:03.287587 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:23:03.287587 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:23:03.287587 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 00:23:03.283174 unknown[1026]: wrote ssh authorized keys file for user: core Jul 11 00:23:03.326256 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:23:03.673510 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:23:03.681246 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:23:03.681246 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 11 00:23:04.195764 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 00:23:04.276437 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:23:04.276437 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:23:04.281363 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:23:04.295991 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:23:04.287427 systemd-networkd[851]: eth0: Gained IPv6LL Jul 11 00:23:04.300068 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:23:04.302237 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:23:04.304224 
ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:23:04.464947 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:23:04.464947 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:23:04.470614 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 00:23:04.727281 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 11 00:23:05.199303 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:23:05.199303 ignition[1026]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 11 00:23:05.209137 ignition[1026]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:23:05.414862 ignition[1026]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:23:05.414862 ignition[1026]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 11 00:23:05.414862 ignition[1026]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 11 00:23:05.414862 ignition[1026]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:23:05.431904 ignition[1026]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:23:05.431904 ignition[1026]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 11 00:23:05.431904 ignition[1026]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:23:05.480201 ignition[1026]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:23:05.487444 ignition[1026]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:23:05.489992 ignition[1026]: INFO : files: files passed Jul 11 00:23:05.489992 ignition[1026]: INFO : Ignition finished successfully Jul 11 00:23:05.500349 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:23:05.504824 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:23:05.508292 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:23:05.538223 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:23:05.538370 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 11 00:23:05.543127 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:23:05.546671 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:23:05.546671 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:23:05.872823 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:23:05.895217 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:23:05.897559 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:23:05.901973 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:23:06.083942 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:23:06.084110 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:23:06.096743 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:23:06.098982 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:23:06.101257 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:23:06.102728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:23:06.143272 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:23:06.147122 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:23:06.194638 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:23:06.199713 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:23:06.201054 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:23:06.228125 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:23:06.228421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:23:06.231654 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:23:06.233853 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:23:06.237134 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:23:06.268531 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:23:06.271018 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:23:06.273479 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 00:23:06.274747 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:23:06.275139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:23:06.275516 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:23:06.275890 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:23:06.276275 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:23:06.286721 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:23:06.286930 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:23:06.291182 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:23:06.292560 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:23:06.295250 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:23:06.297690 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:23:06.297858 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:23:06.298045 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:23:06.302679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:23:06.302853 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:23:06.305333 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:23:06.305626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:23:06.307239 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:23:06.310481 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:23:06.310862 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:23:06.311253 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:23:06.311365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:23:06.318639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:23:06.318746 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:23:06.320691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:23:06.320830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:23:06.322499 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:23:06.322614 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:23:06.332666 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:23:06.334622 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:23:06.334801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:23:06.337898 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:23:06.340199 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:23:06.340385 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:23:06.342590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:23:06.342741 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:23:06.354085 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:23:06.354377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:23:06.370826 ignition[1081]: INFO : Ignition 2.21.0
Jul 11 00:23:06.370826 ignition[1081]: INFO : Stage: umount
Jul 11 00:23:06.391517 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:23:06.391517 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:23:06.391517 ignition[1081]: INFO : umount: umount passed
Jul 11 00:23:06.391517 ignition[1081]: INFO : Ignition finished successfully
Jul 11 00:23:06.375233 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:23:06.375369 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:23:06.390843 systemd[1]: Stopped target network.target - Network.
Jul 11 00:23:06.392423 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:23:06.392500 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:23:06.395987 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:23:06.396058 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:23:06.397076 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:23:06.397174 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:23:06.400934 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:23:06.400991 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:23:06.403297 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:23:06.406180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:23:06.408577 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:23:06.409392 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:23:06.409546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:23:06.413899 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:23:06.414068 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:23:06.419851 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 00:23:06.420165 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:23:06.420312 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:23:06.424923 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 00:23:06.426235 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 00:23:06.448848 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:23:06.448952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:23:06.451184 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:23:06.451270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:23:06.455074 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:23:06.456372 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:23:06.456446 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:23:06.456804 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:23:06.456867 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:23:06.462269 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:23:06.462355 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:23:06.463457 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:23:06.463521 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:23:06.467757 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:23:06.471768 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 00:23:06.471864 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:23:06.482680 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:23:06.482850 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:23:06.485326 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:23:06.485557 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:23:06.488624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:23:06.488771 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:23:06.489920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:23:06.489981 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:23:06.490626 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:23:06.490702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:23:06.491575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:23:06.491645 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:23:06.492463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:23:06.492557 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:23:06.504734 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:23:06.506889 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 00:23:06.507054 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:23:06.510912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:23:06.512006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:23:06.514621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:23:06.514686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:23:06.544492 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 00:23:06.544586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 00:23:06.544653 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:23:06.552049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:23:06.552306 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:23:06.554942 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:23:06.558964 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:23:06.583053 systemd[1]: Switching root.
Jul 11 00:23:06.665076 systemd-journald[219]: Journal stopped
Jul 11 00:23:09.374291 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:23:09.374381 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:23:09.374406 kernel: SELinux: policy capability open_perms=1
Jul 11 00:23:09.374423 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:23:09.374443 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:23:09.374458 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:23:09.374474 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:23:09.374497 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:23:09.374519 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:23:09.374535 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 00:23:09.374557 kernel: audit: type=1403 audit(1752193387.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:23:09.374586 systemd[1]: Successfully loaded SELinux policy in 65.114ms.
Jul 11 00:23:09.374621 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.394ms.
Jul 11 00:23:09.374638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 00:23:09.374654 systemd[1]: Detected virtualization kvm.
Jul 11 00:23:09.374677 systemd[1]: Detected architecture x86-64.
Jul 11 00:23:09.374702 systemd[1]: Detected first boot.
Jul 11 00:23:09.374718 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:23:09.374733 zram_generator::config[1126]: No configuration found.
Jul 11 00:23:09.374749 kernel: Guest personality initialized and is inactive
Jul 11 00:23:09.374763 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 00:23:09.374780 kernel: Initialized host personality
Jul 11 00:23:09.374794 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 00:23:09.374808 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:23:09.374835 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 00:23:09.374853 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 00:23:09.376235 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 00:23:09.376261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:23:09.376279 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:23:09.376297 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:23:09.376313 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:23:09.376330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:23:09.376348 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:23:09.376380 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:23:09.376398 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:23:09.376415 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:23:09.376436 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:23:09.376453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:23:09.376473 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:23:09.376490 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:23:09.376511 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:23:09.376536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:23:09.376553 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 00:23:09.376571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:23:09.376589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:23:09.376605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 00:23:09.376621 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 00:23:09.376642 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:23:09.376659 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:23:09.376683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:23:09.376713 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:23:09.376733 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:23:09.376751 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:23:09.376768 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:23:09.376785 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:23:09.376801 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 00:23:09.376818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:23:09.376835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:23:09.376862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:23:09.376879 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:23:09.376896 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:23:09.376913 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:23:09.376930 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:23:09.376955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:09.376972 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:23:09.376990 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:23:09.377006 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:23:09.377032 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:23:09.377049 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:23:09.377065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:23:09.377081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:23:09.377099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:23:09.377118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:23:09.377134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:23:09.377152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:23:09.378292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:23:09.378313 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:23:09.378330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:23:09.378347 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:23:09.378365 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 00:23:09.378382 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 00:23:09.378404 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 00:23:09.378425 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 00:23:09.378442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:23:09.378467 kernel: fuse: init (API version 7.41)
Jul 11 00:23:09.378484 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:23:09.378501 kernel: ACPI: bus type drm_connector registered
Jul 11 00:23:09.378517 kernel: loop: module loaded
Jul 11 00:23:09.378532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:23:09.378549 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:23:09.378566 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:23:09.378588 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 00:23:09.378652 systemd-journald[1197]: Collecting audit messages is disabled.
Jul 11 00:23:09.378940 systemd-journald[1197]: Journal started
Jul 11 00:23:09.378985 systemd-journald[1197]: Runtime Journal (/run/log/journal/d0e3b9b31f6a48b5a99ffedf341e7bfd) is 6M, max 48.6M, 42.5M free.
Jul 11 00:23:08.798129 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:23:08.824203 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:23:08.825068 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 00:23:09.381206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:23:09.387463 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 00:23:09.387558 systemd[1]: Stopped verity-setup.service.
Jul 11 00:23:09.393189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:09.397297 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:23:09.399133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:23:09.401024 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:23:09.402683 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:23:09.404924 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:23:09.407771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:23:09.409309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:23:09.411514 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:23:09.413739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:23:09.416836 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:23:09.417304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:23:09.419483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:23:09.419806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:23:09.422605 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:23:09.422940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:23:09.424933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:23:09.425246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:23:09.427564 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:23:09.427854 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:23:09.430713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:23:09.431118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:23:09.433964 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:23:09.436427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:23:09.439304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:23:09.442595 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 00:23:09.463329 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:23:09.469839 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:23:09.474444 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:23:09.475994 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:23:09.476045 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:23:09.478612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 00:23:09.485066 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:23:09.486584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:23:09.489361 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:23:09.494307 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:23:09.495593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:23:09.501543 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:23:09.503221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:23:09.504899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:23:09.515325 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:23:09.517947 systemd-journald[1197]: Time spent on flushing to /var/log/journal/d0e3b9b31f6a48b5a99ffedf341e7bfd is 15.770ms for 984 entries.
Jul 11 00:23:09.517947 systemd-journald[1197]: System Journal (/var/log/journal/d0e3b9b31f6a48b5a99ffedf341e7bfd) is 8M, max 195.6M, 187.6M free.
Jul 11 00:23:09.634360 systemd-journald[1197]: Received client request to flush runtime journal.
Jul 11 00:23:09.634411 kernel: loop0: detected capacity change from 0 to 113872
Jul 11 00:23:09.634426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:23:09.634439 kernel: loop1: detected capacity change from 0 to 224512
Jul 11 00:23:09.519792 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:23:09.523626 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:23:09.527390 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:23:09.545324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:23:09.590249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:23:09.636855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:23:09.641258 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:23:09.644180 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:23:09.647878 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:23:09.653826 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 00:23:09.658851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:23:09.950177 kernel: loop2: detected capacity change from 0 to 146240
Jul 11 00:23:10.033352 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jul 11 00:23:10.033516 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:23:10.034242 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jul 11 00:23:10.037330 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 00:23:10.040268 kernel: loop3: detected capacity change from 0 to 113872
Jul 11 00:23:10.045785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:23:10.057768 kernel: loop4: detected capacity change from 0 to 224512
Jul 11 00:23:10.078630 kernel: loop5: detected capacity change from 0 to 146240
Jul 11 00:23:10.097333 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:23:10.098129 (sd-merge)[1267]: Merged extensions into '/usr'.
Jul 11 00:23:10.104998 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:23:10.105018 systemd[1]: Reloading...
Jul 11 00:23:10.196204 zram_generator::config[1294]: No configuration found.
Jul 11 00:23:10.212457 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:23:10.348254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:23:10.454014 systemd[1]: Reloading finished in 348 ms.
Jul 11 00:23:10.489990 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:23:10.491789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:23:10.521219 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:23:10.523748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:23:10.539807 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:23:10.539823 systemd[1]: Reloading...
Jul 11 00:23:10.571424 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 00:23:10.571479 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 00:23:10.571959 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:23:10.575014 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:23:10.576445 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:23:10.576955 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 11 00:23:10.577122 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 11 00:23:10.586892 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:23:10.586996 systemd-tmpfiles[1332]: Skipping /boot
Jul 11 00:23:10.608183 zram_generator::config[1356]: No configuration found.
Jul 11 00:23:10.611554 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:23:10.611579 systemd-tmpfiles[1332]: Skipping /boot
Jul 11 00:23:10.720781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:23:10.820036 systemd[1]: Reloading finished in 279 ms.
Jul 11 00:23:10.843685 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:23:10.877907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:23:10.887495 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 00:23:10.890505 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:23:10.893792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:23:10.905970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:23:10.910120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:23:10.916475 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:23:10.922700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:10.922938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:23:10.926599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:23:10.958616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:23:10.962448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:23:10.963870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:23:10.964001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:23:10.972481 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:23:10.976274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:10.979480 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:23:10.982266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:23:10.982541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:23:10.983408 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Jul 11 00:23:10.984287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:23:10.984577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:23:10.986518 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:23:10.986787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:23:10.996668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:23:11.003146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:11.003449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:23:11.005008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:23:11.007443 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:23:11.015511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:23:11.018353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:23:11.019896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:23:11.020273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:23:11.029869 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:23:11.031246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:23:11.033535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:23:11.034761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:23:11.037603 augenrules[1438]: No rules
Jul 11 00:23:11.037529 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:23:11.040176 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:23:11.040548 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 00:23:11.042542 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:23:11.043538 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:23:11.046703 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:23:11.048943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:23:11.056852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:23:11.059401 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:23:11.059796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:23:11.062470 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:23:11.070071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:23:11.074204 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:23:11.097331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:23:11.098925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:23:11.099017 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:23:11.104465 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:23:11.106446 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:23:11.164115 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 00:23:11.255321 systemd-resolved[1401]: Positive Trust Anchors:
Jul 11 00:23:11.255347 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:23:11.255389 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:23:11.259866 systemd-resolved[1401]: Defaulting to hostname 'linux'.
Jul 11 00:23:11.261967 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:23:11.263265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:23:11.272202 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:23:11.275598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:23:11.278680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:23:11.293179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 11 00:23:11.339211 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:23:11.352254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:23:11.376814 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:23:11.377215 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:23:11.379166 systemd-networkd[1474]: lo: Link UP
Jul 11 00:23:11.379674 systemd-networkd[1474]: lo: Gained carrier
Jul 11 00:23:11.390447 systemd-networkd[1474]: Enumeration completed
Jul 11 00:23:11.390565 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:23:11.391308 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:23:11.391313 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:23:11.392260 systemd[1]: Reached target network.target - Network.
Jul 11 00:23:11.392746 systemd-networkd[1474]: eth0: Link UP
Jul 11 00:23:11.393136 systemd-networkd[1474]: eth0: Gained carrier
Jul 11 00:23:11.393150 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:23:11.397521 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 00:23:11.401437 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:23:11.405500 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:23:11.424313 systemd-networkd[1474]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:23:11.426452 systemd-timesyncd[1478]: Network configuration changed, trying to establish connection.
Jul 11 00:23:11.430610 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:23:12.425269 systemd-resolved[1401]: Clock change detected. Flushing caches.
Jul 11 00:23:12.425472 systemd-timesyncd[1478]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:23:12.425473 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:23:12.425581 systemd-timesyncd[1478]: Initial clock synchronization to Fri 2025-07-11 00:23:12.425200 UTC.
Jul 11 00:23:12.426846 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:23:12.428206 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 00:23:12.429444 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:23:12.431191 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:23:12.431236 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:23:12.432179 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:23:12.433482 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:23:12.435049 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:23:12.437161 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:23:12.439163 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:23:12.443334 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:23:12.452972 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 00:23:12.454431 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 00:23:12.455854 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 00:23:12.465590 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:23:12.467098 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 00:23:12.469494 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 00:23:12.471334 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:23:12.476900 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:23:12.477893 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:23:12.478859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:23:12.478891 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:23:12.480179 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:23:12.483537 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:23:12.487382 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:23:12.493486 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:23:12.498244 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:23:12.514285 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:23:12.516109 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 00:23:12.518589 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:23:12.523179 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:23:12.531351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:23:12.535117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:23:12.564150 jq[1523]: false
Jul 11 00:23:12.564505 extend-filesystems[1524]: Found /dev/vda6
Jul 11 00:23:12.542307 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:23:12.548571 oslogin_cache_refresh[1526]: Refreshing passwd entry cache
Jul 11 00:23:12.565996 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing passwd entry cache
Jul 11 00:23:12.565228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:23:12.565932 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:23:12.570297 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:23:12.593287 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting users, quitting
Jul 11 00:23:12.593287 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:23:12.593287 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing group entry cache
Jul 11 00:23:12.593287 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting groups, quitting
Jul 11 00:23:12.593287 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:23:12.572302 oslogin_cache_refresh[1526]: Failure getting users, quitting
Jul 11 00:23:12.593702 extend-filesystems[1524]: Found /dev/vda9
Jul 11 00:23:12.593702 extend-filesystems[1524]: Checking size of /dev/vda9
Jul 11 00:23:12.590473 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:23:12.572334 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:23:12.572422 oslogin_cache_refresh[1526]: Refreshing group entry cache
Jul 11 00:23:12.592230 oslogin_cache_refresh[1526]: Failure getting groups, quitting
Jul 11 00:23:12.592253 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:23:12.596963 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:23:12.627327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:23:12.641797 extend-filesystems[1524]: Resized partition /dev/vda9
Jul 11 00:23:12.627708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:23:12.648101 jq[1544]: true
Jul 11 00:23:12.628180 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 00:23:12.628493 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 00:23:12.639347 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:23:12.639670 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:23:12.642286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:23:12.642547 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:23:12.660131 update_engine[1538]: I20250711 00:23:12.658113 1538 main.cc:92] Flatcar Update Engine starting
Jul 11 00:23:12.665916 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 00:23:12.693956 jq[1554]: true
Jul 11 00:23:12.710995 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:23:12.720669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:23:12.738530 tar[1553]: linux-amd64/LICENSE
Jul 11 00:23:12.738829 tar[1553]: linux-amd64/helm
Jul 11 00:23:12.740499 kernel: kvm_amd: TSC scaling supported
Jul 11 00:23:12.740531 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:23:12.740570 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:23:12.741496 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:23:12.743291 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:23:12.743318 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:23:12.760314 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:23:12.776419 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 00:23:12.777016 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 00:23:12.779530 systemd-logind[1534]: New seat seat0.
Jul 11 00:23:13.017313 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:23:12.906273 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:23:13.017447 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:23:12.963911 dbus-daemon[1521]: [system] SELinux support is enabled
Jul 11 00:23:13.018189 update_engine[1538]: I20250711 00:23:12.981590 1538 update_check_scheduler.cc:74] Next update check in 4m1s
Jul 11 00:23:12.964193 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:23:12.974383 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 11 00:23:12.966632 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:23:12.966661 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:23:12.966766 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:23:12.966782 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 00:23:12.987747 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:23:12.996245 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:23:13.030472 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:23:13.057151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:23:13.061771 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:23:13.113147 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:23:13.113147 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:23:13.113147 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:23:13.112864 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:23:13.113837 extend-filesystems[1524]: Resized filesystem in /dev/vda9
Jul 11 00:23:13.113216 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:23:13.157395 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:23:13.264948 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:23:13.265595 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:23:13.267635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:23:13.273808 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:23:13.306550 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:23:13.309805 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:23:13.314617 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:23:13.360159 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:23:13.364065 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:23:13.391630 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 00:23:13.393488 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:23:13.397472 containerd[1555]: time="2025-07-11T00:23:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 00:23:13.401321 containerd[1555]: time="2025-07-11T00:23:13.401237384Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.412737030Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.501µs"
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.412785932Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.412816058Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413142691Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413161045Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413225446Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413306769Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413318691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413768755Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413799733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413813729Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:23:13.415763 containerd[1555]: time="2025-07-11T00:23:13.413822857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 00:23:13.416179 containerd[1555]: time="2025-07-11T00:23:13.413941830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 00:23:13.416179 containerd[1555]: time="2025-07-11T00:23:13.414340277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:23:13.416179 containerd[1555]: time="2025-07-11T00:23:13.414377637Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:23:13.416179 containerd[1555]: time="2025-07-11T00:23:13.414387746Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 00:23:13.416179 containerd[1555]: time="2025-07-11T00:23:13.414469159Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 00:23:13.438913 containerd[1555]: time="2025-07-11T00:23:13.438251078Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 00:23:13.438913 containerd[1555]: time="2025-07-11T00:23:13.438673090Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:23:13.630824 tar[1553]: linux-amd64/README.md
Jul 11 00:23:13.660557 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 11 00:23:13.683621 containerd[1555]: time="2025-07-11T00:23:13.683547747Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683645931Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683665858Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683678051Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683695334Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683714730Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683730770Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683745798Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683758152Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 00:23:13.683766 containerd[1555]: time="2025-07-11T00:23:13.683772228Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 00:23:13.684244 containerd[1555]: time="2025-07-11T00:23:13.683784902Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 00:23:13.684244 containerd[1555]: time="2025-07-11T00:23:13.683801413Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 00:23:13.684244 containerd[1555]: time="2025-07-11T00:23:13.684241658Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684273218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684288697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684302823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684314325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684327259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684342227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 00:23:13.684355 containerd[1555]: time="2025-07-11T00:23:13.684355822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 00:23:13.684546 containerd[1555]: time="2025-07-11T00:23:13.684384246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 00:23:13.684546 containerd[1555]: time="2025-07-11T00:23:13.684396839Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 00:23:13.684546 containerd[1555]: time="2025-07-11T00:23:13.684407339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 00:23:13.684546 containerd[1555]: time="2025-07-11T00:23:13.684493771Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 00:23:13.684546 containerd[1555]: time="2025-07-11T00:23:13.684510452Z" level=info msg="Start snapshots syncer"
Jul 11 00:23:13.684674 containerd[1555]: time="2025-07-11T00:23:13.684558863Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 00:23:13.684956 containerd[1555]: time="2025-07-11T00:23:13.684870267Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 00:23:13.685212 containerd[1555]: time="2025-07-11T00:23:13.684961819Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 00:23:13.686308 containerd[1555]: time="2025-07-11T00:23:13.686254213Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 11 00:23:13.686453 containerd[1555]: time="2025-07-11T00:23:13.686413351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 11 00:23:13.686453 containerd[1555]: time="2025-07-11T00:23:13.686444680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 11 00:23:13.686453 containerd[1555]: time="2025-07-11T00:23:13.686456051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 11 00:23:13.686510 containerd[1555]: time="2025-07-11T00:23:13.686499923Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686514220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686524950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686546791Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686568752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686580194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 11 00:23:13.686608 containerd[1555]: time="2025-07-11T00:23:13.686590694Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686626831Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686644545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686655285Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686667438Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686676484Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686687465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686700129Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 11
00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686720026Z" level=info msg="runtime interface created" Jul 11 00:23:13.686751 containerd[1555]: time="2025-07-11T00:23:13.686732860Z" level=info msg="created NRI interface" Jul 11 00:23:13.686969 containerd[1555]: time="2025-07-11T00:23:13.686765852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 00:23:13.686969 containerd[1555]: time="2025-07-11T00:23:13.686801459Z" level=info msg="Connect containerd service" Jul 11 00:23:13.686969 containerd[1555]: time="2025-07-11T00:23:13.686845041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:23:13.687888 containerd[1555]: time="2025-07-11T00:23:13.687835228Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:23:13.864439 containerd[1555]: time="2025-07-11T00:23:13.864338154Z" level=info msg="Start subscribing containerd event" Jul 11 00:23:13.864594 containerd[1555]: time="2025-07-11T00:23:13.864469360Z" level=info msg="Start recovering state" Jul 11 00:23:13.864719 containerd[1555]: time="2025-07-11T00:23:13.864686537Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:23:13.864760 containerd[1555]: time="2025-07-11T00:23:13.864686988Z" level=info msg="Start event monitor" Jul 11 00:23:13.864807 containerd[1555]: time="2025-07-11T00:23:13.864781675Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 11 00:23:13.864960 containerd[1555]: time="2025-07-11T00:23:13.864786444Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:23:13.864999 containerd[1555]: time="2025-07-11T00:23:13.864965590Z" level=info msg="Start streaming server" Jul 11 00:23:13.865028 containerd[1555]: time="2025-07-11T00:23:13.864998452Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 00:23:13.865028 containerd[1555]: time="2025-07-11T00:23:13.865014041Z" level=info msg="runtime interface starting up..." Jul 11 00:23:13.865028 containerd[1555]: time="2025-07-11T00:23:13.865025713Z" level=info msg="starting plugins..." Jul 11 00:23:13.865125 containerd[1555]: time="2025-07-11T00:23:13.865097297Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 00:23:13.865416 containerd[1555]: time="2025-07-11T00:23:13.865386289Z" level=info msg="containerd successfully booted in 0.468480s" Jul 11 00:23:13.865560 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:23:13.884991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:23:13.887768 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:37370.service - OpenSSH per-connection server daemon (10.0.0.1:37370). Jul 11 00:23:13.969810 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 37370 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:13.972133 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:13.979831 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:23:13.982381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:23:13.984022 systemd-networkd[1474]: eth0: Gained IPv6LL Jul 11 00:23:13.988935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
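The containerd startup above dumps its effective CRI configuration as a single escaped-JSON `config="..."` field inside one journal entry. A minimal Python sketch for pulling such a field out of a log line — the sample below is a hypothetical, heavily trimmed stand-in for the full dump, not the real entry:

```python
import json
import re

# Hypothetical trimmed stand-in for the "starting cri plugin" entry above;
# the real line carries the complete CRI config as one escaped JSON string.
line = r'time="2025-07-11T00:23:13Z" level=info msg="starting cri plugin" config="{\"enableSelinux\":true,\"maxContainerLogSize\":16384,\"enableCDI\":true}"'

m = re.search(r'config="(.*)"$', line)
cfg = json.loads(m.group(1).replace('\\"', '"'))  # undo the log's quote escaping
print(cfg["enableSelinux"], cfg["maxContainerLogSize"])
```

The same approach recovers any nested value (e.g. `cfg["containerd"]["runtimes"]` in the real dump) without hand-editing the escaped string.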
Jul 11 00:23:13.994412 systemd-logind[1534]: New session 1 of user core. Jul 11 00:23:13.995016 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:23:13.998908 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:23:14.003383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:23:14.012597 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:23:14.030468 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:23:14.045506 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:23:14.048726 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:23:14.049251 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:23:14.051670 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:23:14.053483 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:23:14.056944 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:23:14.060742 systemd-logind[1534]: New session c1 of user core. Jul 11 00:23:14.296441 systemd[1660]: Queued start job for default target default.target. Jul 11 00:23:14.315027 systemd[1660]: Created slice app.slice - User Application Slice. Jul 11 00:23:14.315062 systemd[1660]: Reached target paths.target - Paths. Jul 11 00:23:14.315154 systemd[1660]: Reached target timers.target - Timers. Jul 11 00:23:14.317246 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:23:14.329958 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:23:14.330113 systemd[1660]: Reached target sockets.target - Sockets. Jul 11 00:23:14.330165 systemd[1660]: Reached target basic.target - Basic System. 
Jul 11 00:23:14.330208 systemd[1660]: Reached target default.target - Main User Target. Jul 11 00:23:14.330243 systemd[1660]: Startup finished in 259ms. Jul 11 00:23:14.330713 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:23:14.333590 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:23:14.404107 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:37384.service - OpenSSH per-connection server daemon (10.0.0.1:37384). Jul 11 00:23:14.467427 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 37384 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:14.469035 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:14.473854 systemd-logind[1534]: New session 2 of user core. Jul 11 00:23:14.481230 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:23:14.539690 sshd[1677]: Connection closed by 10.0.0.1 port 37384 Jul 11 00:23:14.540471 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:14.554761 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:37384.service: Deactivated successfully. Jul 11 00:23:14.556786 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:23:14.557791 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:23:14.560943 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Jul 11 00:23:14.563689 systemd-logind[1534]: Removed session 2. Jul 11 00:23:14.621005 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:14.858905 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:14.868005 systemd-logind[1534]: New session 3 of user core. Jul 11 00:23:14.876822 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 11 00:23:14.989692 sshd[1685]: Connection closed by 10.0.0.1 port 37392 Jul 11 00:23:14.990133 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:14.995867 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:37392.service: Deactivated successfully. Jul 11 00:23:14.998179 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:23:14.999009 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:23:15.000793 systemd-logind[1534]: Removed session 3. Jul 11 00:23:15.418588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:23:15.420818 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:23:15.422388 systemd[1]: Startup finished in 3.577s (kernel) + 9.287s (initrd) + 6.558s (userspace) = 19.423s. Jul 11 00:23:15.454614 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:23:16.401769 kubelet[1695]: E0711 00:23:16.401682 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:23:16.406489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:23:16.406802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:23:16.407362 systemd[1]: kubelet.service: Consumed 2.017s CPU time, 266.4M memory peak. Jul 11 00:23:25.015244 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:51258.service - OpenSSH per-connection server daemon (10.0.0.1:51258). 
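The `Startup finished in 3.577s (kernel) + 9.287s (initrd) + 6.558s (userspace) = 19.423s` summary above can be checked mechanically; systemd rounds each stage independently, so the printed total may differ from the naive sum by about a millisecond. A small sketch:

```python
import re

line = ("Startup finished in 3.577s (kernel) + 9.287s (initrd) "
        "+ 6.558s (userspace) = 19.423s")

# Pull every "<float>s" value: the stage durations, then the total.
*stages, total = (float(x) for x in re.findall(r"([0-9.]+)s", line))

# Allow ~1 ms of slack for per-stage rounding.
assert abs(sum(stages) - total) < 0.01
print(stages, total)
```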
Jul 11 00:23:25.085487 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 51258 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:25.087537 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.093156 systemd-logind[1534]: New session 4 of user core. Jul 11 00:23:25.103438 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:23:25.160145 sshd[1710]: Connection closed by 10.0.0.1 port 51258 Jul 11 00:23:25.160575 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:25.174585 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:51258.service: Deactivated successfully. Jul 11 00:23:25.176896 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:23:25.177883 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:23:25.181523 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:51266.service - OpenSSH per-connection server daemon (10.0.0.1:51266). Jul 11 00:23:25.183295 systemd-logind[1534]: Removed session 4. Jul 11 00:23:25.250732 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 51266 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:25.252844 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.260978 systemd-logind[1534]: New session 5 of user core. Jul 11 00:23:25.270523 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:23:25.323265 sshd[1718]: Connection closed by 10.0.0.1 port 51266 Jul 11 00:23:25.323707 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:25.339330 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:51266.service: Deactivated successfully. Jul 11 00:23:25.341212 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:23:25.342045 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. 
Jul 11 00:23:25.344976 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:51282.service - OpenSSH per-connection server daemon (10.0.0.1:51282). Jul 11 00:23:25.345595 systemd-logind[1534]: Removed session 5. Jul 11 00:23:25.406546 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 51282 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:25.408531 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.414344 systemd-logind[1534]: New session 6 of user core. Jul 11 00:23:25.428501 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:23:25.488694 sshd[1726]: Connection closed by 10.0.0.1 port 51282 Jul 11 00:23:25.489262 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:25.501325 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:51282.service: Deactivated successfully. Jul 11 00:23:25.503907 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:23:25.504918 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:23:25.509010 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:51284.service - OpenSSH per-connection server daemon (10.0.0.1:51284). Jul 11 00:23:25.509905 systemd-logind[1534]: Removed session 6. Jul 11 00:23:25.586009 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 51284 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:25.588108 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.594748 systemd-logind[1534]: New session 7 of user core. Jul 11 00:23:25.604316 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 11 00:23:25.666741 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:23:25.667156 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:23:25.686490 sudo[1735]: pam_unix(sudo:session): session closed for user root Jul 11 00:23:25.688716 sshd[1734]: Connection closed by 10.0.0.1 port 51284 Jul 11 00:23:25.689180 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:25.709575 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:51284.service: Deactivated successfully. Jul 11 00:23:25.712203 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:23:25.713325 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:23:25.718102 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:51286.service - OpenSSH per-connection server daemon (10.0.0.1:51286). Jul 11 00:23:25.719311 systemd-logind[1534]: Removed session 7. Jul 11 00:23:25.783152 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 51286 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:25.785078 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.795903 systemd-logind[1534]: New session 8 of user core. Jul 11 00:23:25.805366 systemd[1]: Started session-8.scope - Session 8 of User core. 
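The sudo entries above (e.g. `core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1`) follow a fixed layout: the invoking user, then `key=value` pairs separated by ` ; `, which makes them straightforward to audit programmatically. A minimal sketch:

```python
# One sudo journal entry, as it appears in the log above
line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"

invoking_user, rest = line.split(" : ", 1)
# Split each "key=value" pair; COMMAND may itself contain spaces, so split on "=" once.
fields = dict(kv.split("=", 1) for kv in rest.split(" ; "))
print(invoking_user, fields["USER"], fields["COMMAND"])
```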
Jul 11 00:23:25.865939 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:23:25.866408 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:23:25.875704 sudo[1745]: pam_unix(sudo:session): session closed for user root Jul 11 00:23:25.884402 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 00:23:25.884732 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:23:25.897818 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:23:25.958187 augenrules[1767]: No rules Jul 11 00:23:25.960066 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:23:25.960395 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 00:23:25.961575 sudo[1744]: pam_unix(sudo:session): session closed for user root Jul 11 00:23:25.963323 sshd[1743]: Connection closed by 10.0.0.1 port 51286 Jul 11 00:23:25.963706 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:25.978238 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:51286.service: Deactivated successfully. Jul 11 00:23:25.980788 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:23:25.981803 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:23:25.985831 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:51294.service - OpenSSH per-connection server daemon (10.0.0.1:51294). Jul 11 00:23:25.986761 systemd-logind[1534]: Removed session 8. Jul 11 00:23:26.057195 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 51294 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:26.059010 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:26.064484 systemd-logind[1534]: New session 9 of user core. 
Jul 11 00:23:26.073660 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:23:26.131972 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:23:26.132390 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:23:26.534461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:23:26.536258 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:23:26.537963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:23:26.558716 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:23:26.821631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:23:26.837298 dockerd[1799]: time="2025-07-11T00:23:26.835423819Z" level=info msg="Starting up" Jul 11 00:23:26.838192 dockerd[1799]: time="2025-07-11T00:23:26.838164088Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 00:23:26.842636 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:23:26.894807 kubelet[1817]: E0711 00:23:26.894715 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:23:26.903864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:23:26.904193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:23:26.904772 systemd[1]: kubelet.service: Consumed 263ms CPU time, 110M memory peak. 
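The kubelet failures above repeat with a climbing restart counter: the unit keeps exiting because `/var/lib/kubelet/config.yaml` does not exist yet (that file is normally written later, e.g. by `kubeadm init`/`join`), and systemd reschedules the service each time. A sketch that spots the pattern in journal text, using a hypothetical trimmed excerpt that mirrors the entries above:

```python
import re

# Hypothetical excerpt mirroring the kubelet restart pattern in the log above
journal = """\
systemd[1]: kubelet.service: Failed with result 'exit-code'.
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
systemd[1]: kubelet.service: Failed with result 'exit-code'.
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
"""

counters = [int(n) for n in re.findall(r"restart counter is at (\d+)", journal)]
print(counters)  # a strictly climbing counter indicates a crash loop
```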
Jul 11 00:23:27.270304 dockerd[1799]: time="2025-07-11T00:23:27.270114329Z" level=info msg="Loading containers: start." Jul 11 00:23:27.283154 kernel: Initializing XFRM netlink socket Jul 11 00:23:27.629356 systemd-networkd[1474]: docker0: Link UP Jul 11 00:23:27.635815 dockerd[1799]: time="2025-07-11T00:23:27.635757581Z" level=info msg="Loading containers: done." Jul 11 00:23:27.652995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1621078850-merged.mount: Deactivated successfully. Jul 11 00:23:27.654635 dockerd[1799]: time="2025-07-11T00:23:27.654573878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:23:27.654726 dockerd[1799]: time="2025-07-11T00:23:27.654703321Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 11 00:23:27.654903 dockerd[1799]: time="2025-07-11T00:23:27.654875694Z" level=info msg="Initializing buildkit" Jul 11 00:23:27.693024 dockerd[1799]: time="2025-07-11T00:23:27.692953313Z" level=info msg="Completed buildkit initialization" Jul 11 00:23:27.698934 dockerd[1799]: time="2025-07-11T00:23:27.698873435Z" level=info msg="Daemon has completed initialization" Jul 11 00:23:27.699588 dockerd[1799]: time="2025-07-11T00:23:27.699000394Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:23:27.699227 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:23:28.437697 containerd[1555]: time="2025-07-11T00:23:28.437051803Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:23:29.162632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848342008.mount: Deactivated successfully. 
Jul 11 00:23:30.289964 containerd[1555]: time="2025-07-11T00:23:30.289898893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:30.290541 containerd[1555]: time="2025-07-11T00:23:30.290492536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 00:23:30.291728 containerd[1555]: time="2025-07-11T00:23:30.291702245Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:30.294295 containerd[1555]: time="2025-07-11T00:23:30.294261104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:30.295059 containerd[1555]: time="2025-07-11T00:23:30.295032861Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.857867996s" Jul 11 00:23:30.295139 containerd[1555]: time="2025-07-11T00:23:30.295067556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 00:23:30.295576 containerd[1555]: time="2025-07-11T00:23:30.295546584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 00:23:32.150241 containerd[1555]: time="2025-07-11T00:23:32.150129479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.151785 containerd[1555]: time="2025-07-11T00:23:32.151739849Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 00:23:32.153603 containerd[1555]: time="2025-07-11T00:23:32.153526530Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.156936 containerd[1555]: time="2025-07-11T00:23:32.156865382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.158064 containerd[1555]: time="2025-07-11T00:23:32.158009597Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.862428618s" Jul 11 00:23:32.158064 containerd[1555]: time="2025-07-11T00:23:32.158053470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 00:23:32.158845 containerd[1555]: time="2025-07-11T00:23:32.158634349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:23:35.650197 containerd[1555]: time="2025-07-11T00:23:35.650110132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:35.651382 containerd[1555]: time="2025-07-11T00:23:35.651311415Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 00:23:35.652776 containerd[1555]: time="2025-07-11T00:23:35.652728633Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:35.655851 containerd[1555]: time="2025-07-11T00:23:35.655793791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:35.656866 containerd[1555]: time="2025-07-11T00:23:35.656790250Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 3.498113051s" Jul 11 00:23:35.656866 containerd[1555]: time="2025-07-11T00:23:35.656830566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 00:23:35.657514 containerd[1555]: time="2025-07-11T00:23:35.657410784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:23:37.054055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:23:37.056705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:23:37.095853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655007808.mount: Deactivated successfully. Jul 11 00:23:37.274297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
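Each `Pulled image` entry above reports both the image size and the elapsed pull time, so a rough per-image throughput falls out directly. A sketch using the kube-apiserver figures from the log above (28795845 bytes in 1.857867996 s); this is only an estimate, since the reported size is the compressed registry payload:

```python
size_bytes = 28_795_845   # from the kube-apiserver "Pulled image" entry above
elapsed_s = 1.857867996   # reported pull duration

mb_per_s = size_bytes / elapsed_s / 1_000_000
print(f"{mb_per_s:.1f} MB/s")
```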
Jul 11 00:23:37.278975 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:23:37.331576 kubelet[2104]: E0711 00:23:37.331404 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:23:37.335545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:23:37.335750 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:23:37.336197 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.7M memory peak.
Jul 11 00:23:38.171224 containerd[1555]: time="2025-07-11T00:23:38.171132068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:38.172126 containerd[1555]: time="2025-07-11T00:23:38.172067122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363"
Jul 11 00:23:38.173427 containerd[1555]: time="2025-07-11T00:23:38.173381517Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:38.175553 containerd[1555]: time="2025-07-11T00:23:38.175498778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:38.176162 containerd[1555]: time="2025-07-11T00:23:38.176118289Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.518640611s"
Jul 11 00:23:38.176162 containerd[1555]: time="2025-07-11T00:23:38.176155780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 11 00:23:38.176674 containerd[1555]: time="2025-07-11T00:23:38.176630771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 11 00:23:38.703633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762194857.mount: Deactivated successfully.
Jul 11 00:23:41.036157 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1490862652 wd_nsec: 1490862109
Jul 11 00:23:43.010028 containerd[1555]: time="2025-07-11T00:23:43.009822620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:43.011457 containerd[1555]: time="2025-07-11T00:23:43.010933504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 11 00:23:43.012639 containerd[1555]: time="2025-07-11T00:23:43.012567588Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:43.016263 containerd[1555]: time="2025-07-11T00:23:43.016186856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:43.017391 containerd[1555]: time="2025-07-11T00:23:43.017329208Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.840652542s"
Jul 11 00:23:43.017391 containerd[1555]: time="2025-07-11T00:23:43.017410691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 11 00:23:43.018415 containerd[1555]: time="2025-07-11T00:23:43.018373376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 11 00:23:46.580196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232063455.mount: Deactivated successfully.
Jul 11 00:23:47.247584 containerd[1555]: time="2025-07-11T00:23:47.247493672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:23:47.374057 containerd[1555]: time="2025-07-11T00:23:47.373925703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 11 00:23:47.410040 containerd[1555]: time="2025-07-11T00:23:47.409961993Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:23:47.537214 containerd[1555]: time="2025-07-11T00:23:47.536979824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:23:47.537698 containerd[1555]: time="2025-07-11T00:23:47.537627469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.519202807s"
Jul 11 00:23:47.537698 containerd[1555]: time="2025-07-11T00:23:47.537688546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 11 00:23:47.538369 containerd[1555]: time="2025-07-11T00:23:47.538329599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 11 00:23:47.554196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 11 00:23:47.557171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:23:47.811925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:23:47.830557 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:23:47.953954 kubelet[2180]: E0711 00:23:47.953869 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:23:47.959027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:23:47.959336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:23:47.960058 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.7M memory peak.
Jul 11 00:23:50.992352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211120986.mount: Deactivated successfully.
Jul 11 00:23:53.576611 containerd[1555]: time="2025-07-11T00:23:53.575654496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:53.577540 containerd[1555]: time="2025-07-11T00:23:53.577427771Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Jul 11 00:23:53.579367 containerd[1555]: time="2025-07-11T00:23:53.579296778Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:53.582454 containerd[1555]: time="2025-07-11T00:23:53.582406139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:53.583680 containerd[1555]: time="2025-07-11T00:23:53.583650080Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.045287679s"
Jul 11 00:23:53.583765 containerd[1555]: time="2025-07-11T00:23:53.583688073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 11 00:23:56.552315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:23:56.552481 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.7M memory peak.
Jul 11 00:23:56.554917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:23:56.579196 systemd[1]: Reload requested from client PID 2273 ('systemctl') (unit session-9.scope)...
Jul 11 00:23:56.579228 systemd[1]: Reloading...
Jul 11 00:23:56.681160 zram_generator::config[2325]: No configuration found.
Jul 11 00:23:57.828806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:23:57.983145 systemd[1]: Reloading finished in 1403 ms.
Jul 11 00:23:58.004468 update_engine[1538]: I20250711 00:23:58.004340 1538 update_attempter.cc:509] Updating boot flags...
Jul 11 00:23:58.064139 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 11 00:23:58.064267 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 11 00:23:58.064625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:23:58.064685 systemd[1]: kubelet.service: Consumed 176ms CPU time, 98.2M memory peak.
Jul 11 00:23:58.067007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:23:58.438918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:23:58.452899 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:23:58.507367 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:23:58.507367 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:23:58.507367 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:23:58.507859 kubelet[2379]: I0711 00:23:58.507397 2379 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:23:58.900001 kubelet[2379]: I0711 00:23:58.899932 2379 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 11 00:23:58.900001 kubelet[2379]: I0711 00:23:58.899969 2379 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:23:58.900371 kubelet[2379]: I0711 00:23:58.900339 2379 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 11 00:23:58.957007 kubelet[2379]: E0711 00:23:58.956950 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:23:58.958626 kubelet[2379]: I0711 00:23:58.958584 2379 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:23:58.972103 kubelet[2379]: I0711 00:23:58.972043 2379 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 11 00:23:58.977765 kubelet[2379]: I0711 00:23:58.977700 2379 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:23:58.978090 kubelet[2379]: I0711 00:23:58.978033 2379 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:23:58.978385 kubelet[2379]: I0711 00:23:58.978075 2379 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:23:58.978515 kubelet[2379]: I0711 00:23:58.978397 2379 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:23:58.978515 kubelet[2379]: I0711 00:23:58.978408 2379 container_manager_linux.go:304] "Creating device plugin manager"
Jul 11 00:23:58.978600 kubelet[2379]: I0711 00:23:58.978582 2379 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:23:59.092055 kubelet[2379]: I0711 00:23:59.091965 2379 kubelet.go:446] "Attempting to sync node with API server"
Jul 11 00:23:59.094385 kubelet[2379]: I0711 00:23:59.094346 2379 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:23:59.094457 kubelet[2379]: I0711 00:23:59.094400 2379 kubelet.go:352] "Adding apiserver pod source"
Jul 11 00:23:59.094457 kubelet[2379]: I0711 00:23:59.094419 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:23:59.098284 kubelet[2379]: I0711 00:23:59.097822 2379 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 11 00:23:59.098421 kubelet[2379]: I0711 00:23:59.098331 2379 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:23:59.098455 kubelet[2379]: W0711 00:23:59.098403 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:23:59.098524 kubelet[2379]: W0711 00:23:59.098396 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:23:59.098524 kubelet[2379]: E0711 00:23:59.098509 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:23:59.098594 kubelet[2379]: E0711 00:23:59.098545 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:23:59.099043 kubelet[2379]: W0711 00:23:59.098997 2379 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:23:59.101584 kubelet[2379]: I0711 00:23:59.101539 2379 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:23:59.101584 kubelet[2379]: I0711 00:23:59.101592 2379 server.go:1287] "Started kubelet"
Jul 11 00:23:59.103398 kubelet[2379]: I0711 00:23:59.103131 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:23:59.103398 kubelet[2379]: I0711 00:23:59.103309 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:23:59.103713 kubelet[2379]: I0711 00:23:59.103676 2379 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:23:59.103887 kubelet[2379]: I0711 00:23:59.103766 2379 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:23:59.105024 kubelet[2379]: I0711 00:23:59.104997 2379 server.go:479] "Adding debug handlers to kubelet server"
Jul 11 00:23:59.117177 kubelet[2379]: I0711 00:23:59.117137 2379 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:23:59.117554 kubelet[2379]: I0711 00:23:59.117535 2379 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:23:59.119229 kubelet[2379]: E0711 00:23:59.119186 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.119776 kubelet[2379]: I0711 00:23:59.119754 2379 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:23:59.119854 kubelet[2379]: I0711 00:23:59.119839 2379 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:23:59.120015 kubelet[2379]: E0711 00:23:59.119979 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms"
Jul 11 00:23:59.120073 kubelet[2379]: W0711 00:23:59.120021 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:23:59.120414 kubelet[2379]: I0711 00:23:59.120390 2379 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:23:59.120548 kubelet[2379]: I0711 00:23:59.120527 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:23:59.123406 kubelet[2379]: I0711 00:23:59.123371 2379 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:23:59.126689 kubelet[2379]: E0711 00:23:59.122344 2379 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510aa63b6b2e3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:23:59.101562426 +0000 UTC m=+0.642603519,LastTimestamp:2025-07-11 00:23:59.101562426 +0000 UTC m=+0.642603519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:23:59.127409 kubelet[2379]: E0711 00:23:59.120102 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:23:59.129774 kubelet[2379]: E0711 00:23:59.129384 2379 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:23:59.143632 kubelet[2379]: I0711 00:23:59.143547 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:23:59.145120 kubelet[2379]: I0711 00:23:59.145075 2379 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:23:59.145120 kubelet[2379]: I0711 00:23:59.145107 2379 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:23:59.145220 kubelet[2379]: I0711 00:23:59.145128 2379 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:23:59.145325 kubelet[2379]: I0711 00:23:59.145302 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:23:59.145389 kubelet[2379]: I0711 00:23:59.145341 2379 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 11 00:23:59.145389 kubelet[2379]: I0711 00:23:59.145371 2379 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:23:59.145389 kubelet[2379]: I0711 00:23:59.145384 2379 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 11 00:23:59.145488 kubelet[2379]: E0711 00:23:59.145442 2379 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:23:59.146923 kubelet[2379]: W0711 00:23:59.146840 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:23:59.146923 kubelet[2379]: E0711 00:23:59.146900 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:23:59.219743 kubelet[2379]: E0711 00:23:59.219529 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.246244 kubelet[2379]: E0711 00:23:59.246105 2379 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 11 00:23:59.320750 kubelet[2379]: E0711 00:23:59.320693 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.321157 kubelet[2379]: E0711 00:23:59.321111 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms"
Jul 11 00:23:59.421464 kubelet[2379]: E0711 00:23:59.421378 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.446959 kubelet[2379]: E0711 00:23:59.446883 2379 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 11 00:23:59.509636 kubelet[2379]: I0711 00:23:59.509440 2379 policy_none.go:49] "None policy: Start"
Jul 11 00:23:59.509636 kubelet[2379]: I0711 00:23:59.509501 2379 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:23:59.509636 kubelet[2379]: I0711 00:23:59.509533 2379 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:23:59.522463 kubelet[2379]: E0711 00:23:59.522371 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.577455 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 11 00:23:59.601071 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 11 00:23:59.607095 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 11 00:23:59.623512 kubelet[2379]: E0711 00:23:59.623443 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:23:59.634981 kubelet[2379]: I0711 00:23:59.634854 2379 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:23:59.635268 kubelet[2379]: I0711 00:23:59.635231 2379 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:23:59.635418 kubelet[2379]: I0711 00:23:59.635255 2379 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:23:59.635694 kubelet[2379]: I0711 00:23:59.635642 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:23:59.636580 kubelet[2379]: E0711 00:23:59.636552 2379 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:23:59.636632 kubelet[2379]: E0711 00:23:59.636617 2379 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:23:59.722626 kubelet[2379]: E0711 00:23:59.722563 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms"
Jul 11 00:23:59.738949 kubelet[2379]: I0711 00:23:59.738908 2379 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:23:59.739502 kubelet[2379]: E0711 00:23:59.739465 2379 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost"
Jul 11 00:23:59.858705 systemd[1]: Created slice kubepods-burstable-pod4b7d19f97d12cf9a4182298820446bd1.slice - libcontainer container kubepods-burstable-pod4b7d19f97d12cf9a4182298820446bd1.slice.
Jul 11 00:23:59.890697 kubelet[2379]: E0711 00:23:59.890648 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:23:59.893313 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 11 00:23:59.907583 kubelet[2379]: E0711 00:23:59.907539 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:23:59.911193 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 11 00:23:59.913401 kubelet[2379]: E0711 00:23:59.913364 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:23:59.924964 kubelet[2379]: I0711 00:23:59.924900 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:59.924964 kubelet[2379]: I0711 00:23:59.924946 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:59.924964 kubelet[2379]: I0711 00:23:59.924966 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:59.925248 kubelet[2379]: I0711 00:23:59.924986 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:23:59.925248 kubelet[2379]: I0711 00:23:59.925007 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:59.925248 kubelet[2379]: I0711 00:23:59.925023 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:59.925248 kubelet[2379]: I0711 00:23:59.925042 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:59.925248 kubelet[2379]: I0711 00:23:59.925150 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:59.925374 kubelet[2379]: I0711 00:23:59.925199 2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:59.941573 kubelet[2379]: I0711 00:23:59.941537 2379 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:23:59.942163 kubelet[2379]: E0711 00:23:59.942052 2379 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost"
Jul 11 00:24:00.191988 kubelet[2379]: E0711 00:24:00.191602 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:00.192705 containerd[1555]: time="2025-07-11T00:24:00.192607986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b7d19f97d12cf9a4182298820446bd1,Namespace:kube-system,Attempt:0,}"
Jul 11 00:24:00.201784 kubelet[2379]: W0711 00:24:00.201713 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:24:00.201784 kubelet[2379]: E0711 00:24:00.201786 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:24:00.208407 kubelet[2379]: E0711 00:24:00.208337 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:00.209060 containerd[1555]: time="2025-07-11T00:24:00.208991656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 11 00:24:00.214525 kubelet[2379]: E0711 00:24:00.214484 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:00.215159 containerd[1555]: time="2025-07-11T00:24:00.215064431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 11 00:24:00.324026 kubelet[2379]: W0711 00:24:00.323913 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:24:00.324026 kubelet[2379]: E0711 00:24:00.324002 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:24:00.344094 kubelet[2379]: I0711 00:24:00.344029 2379 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:24:00.344471 kubelet[2379]: E0711 00:24:00.344414 2379 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost"
Jul 11 00:24:00.437901 kubelet[2379]: W0711 00:24:00.437810 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
Jul 11 00:24:00.437901 kubelet[2379]: E0711 00:24:00.437900 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:24:00.472190 containerd[1555]: time="2025-07-11T00:24:00.471539876Z" level=info msg="connecting to shim 17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9" address="unix:///run/containerd/s/5c4671cdc53ce84d5a0a00155403fed9322ec1aa06b1d13e3329b75b7e0e3463" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:24:00.500658 containerd[1555]: time="2025-07-11T00:24:00.500579791Z" level=info msg="connecting to shim 4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7" address="unix:///run/containerd/s/5d7c5a966659badc4f9af9946c9479e4544866dc6032096afbf9286454f2876f" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:24:00.507340 containerd[1555]: time="2025-07-11T00:24:00.507283879Z" level=info msg="connecting to shim 6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56"
address="unix:///run/containerd/s/e2e713069c34e5225a43da10351c1d29f6b2b88ff6bc328741aeb34512240b65" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:00.523635 kubelet[2379]: E0711 00:24:00.523544 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s" Jul 11 00:24:00.532386 systemd[1]: Started cri-containerd-17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9.scope - libcontainer container 17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9. Jul 11 00:24:00.548274 systemd[1]: Started cri-containerd-4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7.scope - libcontainer container 4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7. Jul 11 00:24:00.570368 systemd[1]: Started cri-containerd-6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56.scope - libcontainer container 6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56. 
Jul 11 00:24:00.593960 kubelet[2379]: W0711 00:24:00.593880 2379 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 11 00:24:00.593960 kubelet[2379]: E0711 00:24:00.593964 2379 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:24:00.768071 containerd[1555]: time="2025-07-11T00:24:00.767871995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b7d19f97d12cf9a4182298820446bd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9\"" Jul 11 00:24:00.769794 kubelet[2379]: E0711 00:24:00.769736 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:00.772191 containerd[1555]: time="2025-07-11T00:24:00.772134459Z" level=info msg="CreateContainer within sandbox \"17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:24:00.837969 containerd[1555]: time="2025-07-11T00:24:00.837892217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56\"" Jul 11 00:24:00.839253 kubelet[2379]: E0711 00:24:00.839219 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:00.841100 containerd[1555]: time="2025-07-11T00:24:00.841036047Z" level=info msg="CreateContainer within sandbox \"6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:24:01.035652 containerd[1555]: time="2025-07-11T00:24:01.035492804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7\"" Jul 11 00:24:01.036126 kubelet[2379]: E0711 00:24:01.036061 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:24:01.036345 kubelet[2379]: E0711 00:24:01.036309 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:01.037998 containerd[1555]: time="2025-07-11T00:24:01.037962959Z" level=info msg="CreateContainer within sandbox \"4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:24:01.146691 kubelet[2379]: I0711 00:24:01.146649 2379 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:24:01.147269 kubelet[2379]: E0711 00:24:01.147197 2379 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" 
node="localhost" Jul 11 00:24:01.177507 containerd[1555]: time="2025-07-11T00:24:01.177448832Z" level=info msg="Container 12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:01.183948 containerd[1555]: time="2025-07-11T00:24:01.183881391Z" level=info msg="Container d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:01.186879 containerd[1555]: time="2025-07-11T00:24:01.186829580Z" level=info msg="Container 8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:01.202031 containerd[1555]: time="2025-07-11T00:24:01.201959115Z" level=info msg="CreateContainer within sandbox \"17280565c45c363497acfee650e3dd8388cdbc4184cadbebbaf5887c81bf49e9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430\"" Jul 11 00:24:01.203058 containerd[1555]: time="2025-07-11T00:24:01.203017354Z" level=info msg="StartContainer for \"12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430\"" Jul 11 00:24:01.204666 containerd[1555]: time="2025-07-11T00:24:01.204623948Z" level=info msg="connecting to shim 12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430" address="unix:///run/containerd/s/5c4671cdc53ce84d5a0a00155403fed9322ec1aa06b1d13e3329b75b7e0e3463" protocol=ttrpc version=3 Jul 11 00:24:01.205017 containerd[1555]: time="2025-07-11T00:24:01.204954002Z" level=info msg="CreateContainer within sandbox \"6ed9caba56daf0c4cd9389c7e6a1dd039bfc627ac2824f471ce98fe088f88a56\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572\"" Jul 11 00:24:01.205600 containerd[1555]: time="2025-07-11T00:24:01.205558754Z" level=info msg="StartContainer for 
\"d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572\"" Jul 11 00:24:01.206537 containerd[1555]: time="2025-07-11T00:24:01.206495172Z" level=info msg="CreateContainer within sandbox \"4e95a924b89e6140188f3b37b534cfca31991de8583553068257a382be4958e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7\"" Jul 11 00:24:01.206841 containerd[1555]: time="2025-07-11T00:24:01.206804837Z" level=info msg="connecting to shim d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572" address="unix:///run/containerd/s/e2e713069c34e5225a43da10351c1d29f6b2b88ff6bc328741aeb34512240b65" protocol=ttrpc version=3 Jul 11 00:24:01.207159 containerd[1555]: time="2025-07-11T00:24:01.207126475Z" level=info msg="StartContainer for \"8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7\"" Jul 11 00:24:01.209122 containerd[1555]: time="2025-07-11T00:24:01.208635605Z" level=info msg="connecting to shim 8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7" address="unix:///run/containerd/s/5d7c5a966659badc4f9af9946c9479e4544866dc6032096afbf9286454f2876f" protocol=ttrpc version=3 Jul 11 00:24:01.229262 systemd[1]: Started cri-containerd-12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430.scope - libcontainer container 12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430. Jul 11 00:24:01.235168 systemd[1]: Started cri-containerd-8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7.scope - libcontainer container 8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7. Jul 11 00:24:01.237484 systemd[1]: Started cri-containerd-d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572.scope - libcontainer container d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572. 
Jul 11 00:24:01.529776 containerd[1555]: time="2025-07-11T00:24:01.529714460Z" level=info msg="StartContainer for \"d07fedb572bb47a7cc79f76491e2ad18e07cd4f673e5e7bc1ab32f319e7b9572\" returns successfully" Jul 11 00:24:01.529980 containerd[1555]: time="2025-07-11T00:24:01.529905259Z" level=info msg="StartContainer for \"12ccda2a8c24a84af4fe735013957a85aa5bd6c7d2fcbc94a161ae2fed8c0430\" returns successfully" Jul 11 00:24:01.530769 containerd[1555]: time="2025-07-11T00:24:01.530691906Z" level=info msg="StartContainer for \"8105887e2a3296f31ed99d615b5897cdb483602d267d8a5561eb35a0c12d71f7\" returns successfully" Jul 11 00:24:02.160255 kubelet[2379]: E0711 00:24:02.160213 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:02.164178 kubelet[2379]: E0711 00:24:02.160350 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:02.168374 kubelet[2379]: E0711 00:24:02.168309 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:02.168542 kubelet[2379]: E0711 00:24:02.168522 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:02.175278 kubelet[2379]: E0711 00:24:02.175204 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:02.175506 kubelet[2379]: E0711 00:24:02.175472 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:02.751397 
kubelet[2379]: I0711 00:24:02.751350 2379 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:24:02.962252 kubelet[2379]: E0711 00:24:02.962204 2379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:24:03.041429 kubelet[2379]: I0711 00:24:03.041261 2379 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:24:03.041429 kubelet[2379]: E0711 00:24:03.041304 2379 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:24:03.054574 kubelet[2379]: E0711 00:24:03.054517 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:24:03.154713 kubelet[2379]: E0711 00:24:03.154619 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:24:03.172830 kubelet[2379]: E0711 00:24:03.172692 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:03.173527 kubelet[2379]: E0711 00:24:03.173017 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:03.173527 kubelet[2379]: E0711 00:24:03.173147 2379 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:03.173527 kubelet[2379]: E0711 00:24:03.173315 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:03.174234 kubelet[2379]: E0711 00:24:03.174190 2379 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:24:03.174882 kubelet[2379]: E0711 00:24:03.174841 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:03.255453 kubelet[2379]: E0711 00:24:03.255376 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:24:03.358410 kubelet[2379]: E0711 00:24:03.357186 2379 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:24:03.419798 kubelet[2379]: I0711 00:24:03.419710 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:03.426656 kubelet[2379]: E0711 00:24:03.426598 2379 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:03.426656 kubelet[2379]: I0711 00:24:03.426645 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:03.429015 kubelet[2379]: E0711 00:24:03.428936 2379 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:03.429015 kubelet[2379]: I0711 00:24:03.428980 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:03.431415 kubelet[2379]: E0711 00:24:03.431384 2379 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:04.097957 kubelet[2379]: I0711 00:24:04.097913 2379 apiserver.go:52] "Watching apiserver" Jul 11 00:24:04.120541 kubelet[2379]: I0711 00:24:04.120493 2379 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:24:04.173024 kubelet[2379]: I0711 00:24:04.172985 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:04.173024 kubelet[2379]: I0711 00:24:04.173013 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:04.173616 kubelet[2379]: I0711 00:24:04.173120 2379 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:04.226630 kubelet[2379]: E0711 00:24:04.226520 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:04.227337 kubelet[2379]: E0711 00:24:04.227308 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:04.227423 kubelet[2379]: E0711 00:24:04.227072 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:05.174505 kubelet[2379]: E0711 00:24:05.174459 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:05.174976 kubelet[2379]: E0711 00:24:05.174691 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 
00:24:05.174976 kubelet[2379]: E0711 00:24:05.174922 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:08.392569 kubelet[2379]: E0711 00:24:08.392513 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:08.912056 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-9.scope)... Jul 11 00:24:08.912075 systemd[1]: Reloading... Jul 11 00:24:09.039149 zram_generator::config[2699]: No configuration found. Jul 11 00:24:09.202218 kubelet[2379]: I0711 00:24:09.201952 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.20192848 podStartE2EDuration="5.20192848s" podCreationTimestamp="2025-07-11 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:09.200993018 +0000 UTC m=+10.742034131" watchObservedRunningTime="2025-07-11 00:24:09.20192848 +0000 UTC m=+10.742969583" Jul 11 00:24:09.210622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 11 00:24:09.223234 kubelet[2379]: I0711 00:24:09.223124 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.22307737 podStartE2EDuration="5.22307737s" podCreationTimestamp="2025-07-11 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:09.222901098 +0000 UTC m=+10.763942201" watchObservedRunningTime="2025-07-11 00:24:09.22307737 +0000 UTC m=+10.764118493" Jul 11 00:24:09.223455 kubelet[2379]: I0711 00:24:09.223246 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.223238243 podStartE2EDuration="5.223238243s" podCreationTimestamp="2025-07-11 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:09.211813066 +0000 UTC m=+10.752854179" watchObservedRunningTime="2025-07-11 00:24:09.223238243 +0000 UTC m=+10.764279376" Jul 11 00:24:09.380338 systemd[1]: Reloading finished in 467 ms. Jul 11 00:24:09.418258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:24:09.443514 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:24:09.443927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:24:09.444007 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 132.1M memory peak. Jul 11 00:24:09.446584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:24:09.717625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:24:09.732291 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:24:09.859202 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:24:09.859202 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:24:09.859202 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:24:09.859202 kubelet[2742]: I0711 00:24:09.858911 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:24:09.869710 kubelet[2742]: I0711 00:24:09.869609 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:24:09.869710 kubelet[2742]: I0711 00:24:09.869665 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:24:09.870045 kubelet[2742]: I0711 00:24:09.870005 2742 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:24:09.871768 kubelet[2742]: I0711 00:24:09.871729 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 11 00:24:09.876470 kubelet[2742]: I0711 00:24:09.876433 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:24:09.882252 kubelet[2742]: I0711 00:24:09.882196 2742 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:24:09.889107 kubelet[2742]: I0711 00:24:09.889052 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:24:09.889429 kubelet[2742]: I0711 00:24:09.889370 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:24:09.889726 kubelet[2742]: I0711 00:24:09.889413 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:24:09.889878 kubelet[2742]: I0711 00:24:09.889758 2742 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:24:09.889878 kubelet[2742]: I0711 00:24:09.889774 2742 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:24:09.889878 kubelet[2742]: I0711 00:24:09.889835 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:24:09.890126 kubelet[2742]: I0711 00:24:09.890054 2742 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:24:09.890126 kubelet[2742]: I0711 00:24:09.890106 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:24:09.890205 kubelet[2742]: I0711 00:24:09.890138 2742 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:24:09.890205 kubelet[2742]: I0711 00:24:09.890155 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:24:09.891313 kubelet[2742]: I0711 00:24:09.891254 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:24:09.891906 kubelet[2742]: I0711 00:24:09.891872 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:24:09.892472 kubelet[2742]: I0711 00:24:09.892435 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:24:09.892537 kubelet[2742]: I0711 00:24:09.892477 2742 server.go:1287] "Started kubelet" Jul 11 00:24:09.895055 kubelet[2742]: I0711 00:24:09.895016 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:24:09.895901 kubelet[2742]: I0711 00:24:09.892954 2742 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:24:09.898115 kubelet[2742]: I0711 00:24:09.896659 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:24:09.898115 kubelet[2742]: I0711 00:24:09.897476 2742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:24:09.902269 kubelet[2742]: I0711 00:24:09.901555 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:24:09.906136 kubelet[2742]: I0711 00:24:09.905378 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:24:09.906136 kubelet[2742]: I0711 00:24:09.905812 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:24:09.906136 kubelet[2742]: I0711 00:24:09.906011 2742 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:24:09.907532 kubelet[2742]: E0711 00:24:09.907144 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:24:09.914483 kubelet[2742]: I0711 00:24:09.914427 2742 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:24:09.915205 kubelet[2742]: I0711 00:24:09.915166 2742 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:24:09.915305 kubelet[2742]: I0711 00:24:09.915281 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:24:09.915767 kubelet[2742]: I0711 00:24:09.915692 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:24:09.918167 kubelet[2742]: I0711 00:24:09.918117 2742 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:24:09.918353 kubelet[2742]: I0711 00:24:09.918281 2742 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:24:09.919211 kubelet[2742]: E0711 00:24:09.918546 2742 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:24:09.922476 kubelet[2742]: I0711 00:24:09.918285 2742 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:24:09.922589 kubelet[2742]: I0711 00:24:09.922485 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:24:09.922589 kubelet[2742]: I0711 00:24:09.922496 2742 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:24:09.922677 kubelet[2742]: E0711 00:24:09.922569 2742 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:24:09.963627 kubelet[2742]: I0711 00:24:09.963559 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:24:09.963627 kubelet[2742]: I0711 00:24:09.963580 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:24:09.963627 kubelet[2742]: I0711 00:24:09.963600 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963780 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963793 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963813 2742 policy_none.go:49] "None policy: Start" Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963822 2742 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963832 2742 state_mem.go:35] "Initializing new in-memory state 
store" Jul 11 00:24:09.963987 kubelet[2742]: I0711 00:24:09.963927 2742 state_mem.go:75] "Updated machine memory state" Jul 11 00:24:09.979329 kubelet[2742]: I0711 00:24:09.979227 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:24:09.979734 kubelet[2742]: I0711 00:24:09.979714 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:24:09.979855 kubelet[2742]: I0711 00:24:09.979809 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:24:09.980359 kubelet[2742]: I0711 00:24:09.980340 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:24:09.984590 kubelet[2742]: E0711 00:24:09.984554 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:24:09.988468 sudo[2774]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:24:09.988922 sudo[2774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:24:10.024151 kubelet[2742]: I0711 00:24:10.024020 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:10.024805 kubelet[2742]: I0711 00:24:10.024774 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.024918 kubelet[2742]: I0711 00:24:10.024855 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.073179 kubelet[2742]: E0711 00:24:10.073116 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:10.074681 kubelet[2742]: E0711 00:24:10.074610 2742 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.074758 kubelet[2742]: E0711 00:24:10.074724 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.087019 kubelet[2742]: I0711 00:24:10.086718 2742 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:24:10.107901 kubelet[2742]: I0711 00:24:10.107820 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.107901 kubelet[2742]: I0711 00:24:10.107874 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.107901 kubelet[2742]: I0711 00:24:10.107900 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.107901 kubelet[2742]: I0711 00:24:10.107920 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:10.108222 kubelet[2742]: I0711 00:24:10.107939 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.108222 kubelet[2742]: I0711 00:24:10.107966 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.108222 kubelet[2742]: I0711 00:24:10.107986 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7d19f97d12cf9a4182298820446bd1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b7d19f97d12cf9a4182298820446bd1\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.108222 kubelet[2742]: I0711 00:24:10.108007 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.108222 kubelet[2742]: I0711 00:24:10.108024 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:24:10.146362 kubelet[2742]: I0711 00:24:10.146304 2742 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:24:10.146529 kubelet[2742]: I0711 00:24:10.146417 2742 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:24:10.373906 kubelet[2742]: E0711 00:24:10.373861 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:10.375979 kubelet[2742]: E0711 00:24:10.375954 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:10.376207 kubelet[2742]: E0711 00:24:10.376176 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:10.516007 sudo[2774]: pam_unix(sudo:session): session closed for user root Jul 11 00:24:10.891732 kubelet[2742]: I0711 00:24:10.891663 2742 apiserver.go:52] "Watching apiserver" Jul 11 00:24:10.906766 kubelet[2742]: I0711 00:24:10.906693 2742 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:24:10.938879 kubelet[2742]: I0711 00:24:10.938797 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:10.939044 kubelet[2742]: E0711 00:24:10.938982 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:10.939153 kubelet[2742]: I0711 00:24:10.939098 2742 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:11.348054 kubelet[2742]: E0711 00:24:11.347400 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:24:11.348054 kubelet[2742]: E0711 00:24:11.347691 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:11.348433 kubelet[2742]: E0711 00:24:11.348381 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:24:11.348635 kubelet[2742]: E0711 00:24:11.348594 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:11.941058 kubelet[2742]: E0711 00:24:11.941012 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:11.941058 kubelet[2742]: E0711 00:24:11.941023 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:12.943786 kubelet[2742]: E0711 00:24:12.943727 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:13.655124 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 11 00:24:13.658203 sshd[1778]: Connection closed by 10.0.0.1 port 51294 Jul 11 00:24:13.659635 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:13.665377 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:51294.service: 
Deactivated successfully. Jul 11 00:24:13.668064 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:24:13.668365 systemd[1]: session-9.scope: Consumed 5.276s CPU time, 259.6M memory peak. Jul 11 00:24:13.670255 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:24:13.673316 systemd-logind[1534]: Removed session 9. Jul 11 00:24:13.946244 kubelet[2742]: E0711 00:24:13.944782 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:14.556867 kubelet[2742]: I0711 00:24:14.556810 2742 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:24:14.557364 containerd[1555]: time="2025-07-11T00:24:14.557315706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:24:14.557804 kubelet[2742]: I0711 00:24:14.557548 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:24:14.946459 kubelet[2742]: E0711 00:24:14.946307 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:16.321116 kubelet[2742]: E0711 00:24:16.321052 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:16.899102 kubelet[2742]: E0711 00:24:16.899013 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:16.923859 systemd[1]: Created slice kubepods-burstable-pod6c8991f2_517d_42a1_b3ce_0350097a6c28.slice - libcontainer container 
kubepods-burstable-pod6c8991f2_517d_42a1_b3ce_0350097a6c28.slice. Jul 11 00:24:16.930841 systemd[1]: Created slice kubepods-besteffort-pod7c9522c5_412f_4b72_8d6c_66eb1b3c12c9.slice - libcontainer container kubepods-besteffort-pod7c9522c5_412f_4b72_8d6c_66eb1b3c12c9.slice. Jul 11 00:24:16.949110 kubelet[2742]: I0711 00:24:16.948763 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c9522c5-412f-4b72-8d6c-66eb1b3c12c9-kube-proxy\") pod \"kube-proxy-mj6fn\" (UID: \"7c9522c5-412f-4b72-8d6c-66eb1b3c12c9\") " pod="kube-system/kube-proxy-mj6fn" Jul 11 00:24:16.949110 kubelet[2742]: I0711 00:24:16.948820 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-hostproc\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949110 kubelet[2742]: I0711 00:24:16.948848 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztbg\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-kube-api-access-kztbg\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949110 kubelet[2742]: I0711 00:24:16.948872 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmhq\" (UniqueName: \"kubernetes.io/projected/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-kube-api-access-xqmhq\") pod \"cilium-operator-6c4d7847fc-bcfsv\" (UID: \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\") " pod="kube-system/cilium-operator-6c4d7847fc-bcfsv" Jul 11 00:24:16.949110 kubelet[2742]: I0711 00:24:16.948897 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-etc-cni-netd\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949403 kubelet[2742]: I0711 00:24:16.948915 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-xtables-lock\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949403 kubelet[2742]: I0711 00:24:16.948934 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-config-path\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949403 kubelet[2742]: I0711 00:24:16.948954 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c9522c5-412f-4b72-8d6c-66eb1b3c12c9-xtables-lock\") pod \"kube-proxy-mj6fn\" (UID: \"7c9522c5-412f-4b72-8d6c-66eb1b3c12c9\") " pod="kube-system/kube-proxy-mj6fn" Jul 11 00:24:16.949403 kubelet[2742]: I0711 00:24:16.948976 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvq2c\" (UniqueName: \"kubernetes.io/projected/7c9522c5-412f-4b72-8d6c-66eb1b3c12c9-kube-api-access-pvq2c\") pod \"kube-proxy-mj6fn\" (UID: \"7c9522c5-412f-4b72-8d6c-66eb1b3c12c9\") " pod="kube-system/kube-proxy-mj6fn" Jul 11 00:24:16.949403 kubelet[2742]: I0711 00:24:16.949027 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6c8991f2-517d-42a1-b3ce-0350097a6c28-clustermesh-secrets\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949702 kubelet[2742]: I0711 00:24:16.949066 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bcfsv\" (UID: \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\") " pod="kube-system/cilium-operator-6c4d7847fc-bcfsv" Jul 11 00:24:16.949702 kubelet[2742]: I0711 00:24:16.949123 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-run\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949702 kubelet[2742]: I0711 00:24:16.949145 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-cgroup\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949702 kubelet[2742]: I0711 00:24:16.949183 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c9522c5-412f-4b72-8d6c-66eb1b3c12c9-lib-modules\") pod \"kube-proxy-mj6fn\" (UID: \"7c9522c5-412f-4b72-8d6c-66eb1b3c12c9\") " pod="kube-system/kube-proxy-mj6fn" Jul 11 00:24:16.949702 kubelet[2742]: I0711 00:24:16.949203 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cni-path\") pod 
\"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949866 kubelet[2742]: I0711 00:24:16.949223 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-lib-modules\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949866 kubelet[2742]: I0711 00:24:16.949271 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-hubble-tls\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949866 kubelet[2742]: I0711 00:24:16.949308 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-bpf-maps\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949866 kubelet[2742]: I0711 00:24:16.949327 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-kernel\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.949866 kubelet[2742]: I0711 00:24:16.949357 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-net\") pod \"cilium-47bf2\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " pod="kube-system/cilium-47bf2" Jul 11 00:24:16.951465 systemd[1]: 
Created slice kubepods-besteffort-pod1e7f3743_8a45_4afb_a758_1ec2ac4384ab.slice - libcontainer container kubepods-besteffort-pod1e7f3743_8a45_4afb_a758_1ec2ac4384ab.slice. Jul 11 00:24:16.951840 kubelet[2742]: E0711 00:24:16.951822 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:16.952111 kubelet[2742]: E0711 00:24:16.952096 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.527652 kubelet[2742]: E0711 00:24:17.527610 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.528472 containerd[1555]: time="2025-07-11T00:24:17.528425543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47bf2,Uid:6c8991f2-517d-42a1-b3ce-0350097a6c28,Namespace:kube-system,Attempt:0,}" Jul 11 00:24:17.547699 kubelet[2742]: E0711 00:24:17.547655 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.550159 containerd[1555]: time="2025-07-11T00:24:17.550119991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mj6fn,Uid:7c9522c5-412f-4b72-8d6c-66eb1b3c12c9,Namespace:kube-system,Attempt:0,}" Jul 11 00:24:17.554711 kubelet[2742]: E0711 00:24:17.554676 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.555232 containerd[1555]: time="2025-07-11T00:24:17.555182420Z" level=info msg="connecting to shim f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57" 
address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:17.557474 containerd[1555]: time="2025-07-11T00:24:17.557438290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcfsv,Uid:1e7f3743-8a45-4afb-a758-1ec2ac4384ab,Namespace:kube-system,Attempt:0,}" Jul 11 00:24:17.595781 systemd[1]: Started cri-containerd-f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57.scope - libcontainer container f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57. Jul 11 00:24:17.658648 containerd[1555]: time="2025-07-11T00:24:17.658587213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47bf2,Uid:6c8991f2-517d-42a1-b3ce-0350097a6c28,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\"" Jul 11 00:24:17.659770 kubelet[2742]: E0711 00:24:17.659728 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.661490 containerd[1555]: time="2025-07-11T00:24:17.661397156Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:24:17.672764 containerd[1555]: time="2025-07-11T00:24:17.672708182Z" level=info msg="connecting to shim 097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89" address="unix:///run/containerd/s/70ebfb8becfed0ef1c0a063ea2cdbdb988efa6ade06a3baeda7c6549a14259ad" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:17.680912 containerd[1555]: time="2025-07-11T00:24:17.680848297Z" level=info msg="connecting to shim 6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1" address="unix:///run/containerd/s/9c742b3ef4c7cc1e490300a669c6fcdbd149f0b243829c81673762bc510d444d" namespace=k8s.io 
protocol=ttrpc version=3 Jul 11 00:24:17.712300 systemd[1]: Started cri-containerd-097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89.scope - libcontainer container 097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89. Jul 11 00:24:17.716058 systemd[1]: Started cri-containerd-6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1.scope - libcontainer container 6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1. Jul 11 00:24:17.770179 containerd[1555]: time="2025-07-11T00:24:17.770111082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mj6fn,Uid:7c9522c5-412f-4b72-8d6c-66eb1b3c12c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89\"" Jul 11 00:24:17.771009 kubelet[2742]: E0711 00:24:17.770984 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.776140 containerd[1555]: time="2025-07-11T00:24:17.776029679Z" level=info msg="CreateContainer within sandbox \"097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:24:17.778581 containerd[1555]: time="2025-07-11T00:24:17.778447846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcfsv,Uid:1e7f3743-8a45-4afb-a758-1ec2ac4384ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\"" Jul 11 00:24:17.779517 kubelet[2742]: E0711 00:24:17.779373 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.793785 containerd[1555]: time="2025-07-11T00:24:17.793726572Z" level=info msg="Container 
2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:17.812149 containerd[1555]: time="2025-07-11T00:24:17.812076441Z" level=info msg="CreateContainer within sandbox \"097783ea941e60b9fd176d9e475e42919da9341bfa9e767d5081b7edf0574a89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39\"" Jul 11 00:24:17.812974 containerd[1555]: time="2025-07-11T00:24:17.812896133Z" level=info msg="StartContainer for \"2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39\"" Jul 11 00:24:17.815288 containerd[1555]: time="2025-07-11T00:24:17.815155210Z" level=info msg="connecting to shim 2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39" address="unix:///run/containerd/s/70ebfb8becfed0ef1c0a063ea2cdbdb988efa6ade06a3baeda7c6549a14259ad" protocol=ttrpc version=3 Jul 11 00:24:17.852504 systemd[1]: Started cri-containerd-2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39.scope - libcontainer container 2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39. 
Jul 11 00:24:17.941569 containerd[1555]: time="2025-07-11T00:24:17.941502348Z" level=info msg="StartContainer for \"2e735e341b0bd34b080e3815c16817c9deab80c2fae6d168470e4f02ce9f3a39\" returns successfully" Jul 11 00:24:17.965112 kubelet[2742]: E0711 00:24:17.965009 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:17.971112 kubelet[2742]: E0711 00:24:17.970620 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:19.951634 kubelet[2742]: I0711 00:24:19.951237 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mj6fn" podStartSLOduration=4.951212953 podStartE2EDuration="4.951212953s" podCreationTimestamp="2025-07-11 00:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:17.987387387 +0000 UTC m=+8.195153170" watchObservedRunningTime="2025-07-11 00:24:19.951212953 +0000 UTC m=+10.158978736" Jul 11 00:24:29.239634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152762955.mount: Deactivated successfully. 
Jul 11 00:24:33.664130 containerd[1555]: time="2025-07-11T00:24:33.663984687Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:33.722569 containerd[1555]: time="2025-07-11T00:24:33.722462212Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 00:24:33.769716 containerd[1555]: time="2025-07-11T00:24:33.769623035Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:33.771496 containerd[1555]: time="2025-07-11T00:24:33.771444945Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.109967007s" Jul 11 00:24:33.771496 containerd[1555]: time="2025-07-11T00:24:33.771489929Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 00:24:33.776136 containerd[1555]: time="2025-07-11T00:24:33.776044138Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:24:33.780073 containerd[1555]: time="2025-07-11T00:24:33.780039057Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:24:33.949315 containerd[1555]: time="2025-07-11T00:24:33.948890299Z" level=info msg="Container 432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:33.965227 containerd[1555]: time="2025-07-11T00:24:33.965027579Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\"" Jul 11 00:24:33.965768 containerd[1555]: time="2025-07-11T00:24:33.965732673Z" level=info msg="StartContainer for \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\"" Jul 11 00:24:33.966755 containerd[1555]: time="2025-07-11T00:24:33.966717021Z" level=info msg="connecting to shim 432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94" address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" protocol=ttrpc version=3 Jul 11 00:24:33.996298 systemd[1]: Started cri-containerd-432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94.scope - libcontainer container 432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94. Jul 11 00:24:34.044913 systemd[1]: cri-containerd-432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94.scope: Deactivated successfully. 
Jul 11 00:24:34.046598 containerd[1555]: time="2025-07-11T00:24:34.046535159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" id:\"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" pid:3164 exited_at:{seconds:1752193474 nanos:45929352}" Jul 11 00:24:34.055732 containerd[1555]: time="2025-07-11T00:24:34.055679783Z" level=info msg="received exit event container_id:\"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" id:\"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" pid:3164 exited_at:{seconds:1752193474 nanos:45929352}" Jul 11 00:24:34.056648 containerd[1555]: time="2025-07-11T00:24:34.056625859Z" level=info msg="StartContainer for \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" returns successfully" Jul 11 00:24:34.080488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94-rootfs.mount: Deactivated successfully. Jul 11 00:24:35.059964 kubelet[2742]: E0711 00:24:35.059925 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:35.062262 containerd[1555]: time="2025-07-11T00:24:35.062116405Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:24:35.149582 containerd[1555]: time="2025-07-11T00:24:35.149432445Z" level=info msg="Container 16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:35.153917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040160708.mount: Deactivated successfully. 
Jul 11 00:24:35.157765 containerd[1555]: time="2025-07-11T00:24:35.157709519Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\""
Jul 11 00:24:35.158390 containerd[1555]: time="2025-07-11T00:24:35.158355582Z" level=info msg="StartContainer for \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\""
Jul 11 00:24:35.160900 containerd[1555]: time="2025-07-11T00:24:35.160838061Z" level=info msg="connecting to shim 16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262" address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" protocol=ttrpc version=3
Jul 11 00:24:35.185332 systemd[1]: Started cri-containerd-16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262.scope - libcontainer container 16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262.
Jul 11 00:24:35.226657 containerd[1555]: time="2025-07-11T00:24:35.226600397Z" level=info msg="StartContainer for \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" returns successfully"
Jul 11 00:24:35.243790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:24:35.244158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:24:35.244490 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:24:35.246793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:24:35.248494 containerd[1555]: time="2025-07-11T00:24:35.248440298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" id:\"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" pid:3212 exited_at:{seconds:1752193475 nanos:247953303}"
Jul 11 00:24:35.248698 containerd[1555]: time="2025-07-11T00:24:35.248663576Z" level=info msg="received exit event container_id:\"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" id:\"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" pid:3212 exited_at:{seconds:1752193475 nanos:247953303}"
Jul 11 00:24:35.250814 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 00:24:35.251444 systemd[1]: cri-containerd-16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262.scope: Deactivated successfully.
Jul 11 00:24:35.272067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262-rootfs.mount: Deactivated successfully.
Jul 11 00:24:35.298709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:24:36.062957 kubelet[2742]: E0711 00:24:36.062914 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:36.064996 containerd[1555]: time="2025-07-11T00:24:36.064959245Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:24:36.290064 containerd[1555]: time="2025-07-11T00:24:36.289996454Z" level=info msg="Container 6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:36.295237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897701667.mount: Deactivated successfully.
Jul 11 00:24:36.310489 containerd[1555]: time="2025-07-11T00:24:36.310424984Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\""
Jul 11 00:24:36.311039 containerd[1555]: time="2025-07-11T00:24:36.311005783Z" level=info msg="StartContainer for \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\""
Jul 11 00:24:36.312883 containerd[1555]: time="2025-07-11T00:24:36.312839465Z" level=info msg="connecting to shim 6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926" address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" protocol=ttrpc version=3
Jul 11 00:24:36.338400 systemd[1]: Started cri-containerd-6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926.scope - libcontainer container 6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926.
Jul 11 00:24:36.387702 systemd[1]: cri-containerd-6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926.scope: Deactivated successfully.
Jul 11 00:24:36.389135 containerd[1555]: time="2025-07-11T00:24:36.389043993Z" level=info msg="received exit event container_id:\"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" id:\"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" pid:3260 exited_at:{seconds:1752193476 nanos:388795425}"
Jul 11 00:24:36.389135 containerd[1555]: time="2025-07-11T00:24:36.389127459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" id:\"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" pid:3260 exited_at:{seconds:1752193476 nanos:388795425}"
Jul 11 00:24:36.389440 containerd[1555]: time="2025-07-11T00:24:36.389048521Z" level=info msg="StartContainer for \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" returns successfully"
Jul 11 00:24:36.416391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926-rootfs.mount: Deactivated successfully.
Jul 11 00:24:37.068189 kubelet[2742]: E0711 00:24:37.068147 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:37.070646 containerd[1555]: time="2025-07-11T00:24:37.070567209Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:24:37.345620 containerd[1555]: time="2025-07-11T00:24:37.345496921Z" level=info msg="Container a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:37.350406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187685102.mount: Deactivated successfully.
Jul 11 00:24:37.767613 containerd[1555]: time="2025-07-11T00:24:37.767445810Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\""
Jul 11 00:24:37.768450 containerd[1555]: time="2025-07-11T00:24:37.768359704Z" level=info msg="StartContainer for \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\""
Jul 11 00:24:37.769676 containerd[1555]: time="2025-07-11T00:24:37.769609661Z" level=info msg="connecting to shim a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de" address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" protocol=ttrpc version=3
Jul 11 00:24:37.797288 systemd[1]: Started cri-containerd-a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de.scope - libcontainer container a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de.
Jul 11 00:24:37.828693 systemd[1]: cri-containerd-a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de.scope: Deactivated successfully.
Jul 11 00:24:37.829369 containerd[1555]: time="2025-07-11T00:24:37.828776176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" id:\"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" pid:3316 exited_at:{seconds:1752193477 nanos:828464161}"
Jul 11 00:24:37.969654 containerd[1555]: time="2025-07-11T00:24:37.969515597Z" level=info msg="received exit event container_id:\"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" id:\"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" pid:3316 exited_at:{seconds:1752193477 nanos:828464161}"
Jul 11 00:24:37.974531 containerd[1555]: time="2025-07-11T00:24:37.974473241Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:24:37.979801 containerd[1555]: time="2025-07-11T00:24:37.979723864Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 11 00:24:37.980796 containerd[1555]: time="2025-07-11T00:24:37.980657507Z" level=info msg="StartContainer for \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" returns successfully"
Jul 11 00:24:37.997452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de-rootfs.mount: Deactivated successfully.
Jul 11 00:24:38.002812 containerd[1555]: time="2025-07-11T00:24:38.002732433Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:24:38.004582 containerd[1555]: time="2025-07-11T00:24:38.004536279Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.228450573s"
Jul 11 00:24:38.004654 containerd[1555]: time="2025-07-11T00:24:38.004582065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 11 00:24:38.007910 containerd[1555]: time="2025-07-11T00:24:38.007841400Z" level=info msg="CreateContainer within sandbox \"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 00:24:38.074181 kubelet[2742]: E0711 00:24:38.074122 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:39.138112 kubelet[2742]: E0711 00:24:39.137941 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:39.141106 containerd[1555]: time="2025-07-11T00:24:39.140116863Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:24:39.178480 containerd[1555]: time="2025-07-11T00:24:39.178403427Z" level=info msg="Container 932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:39.392352 containerd[1555]: time="2025-07-11T00:24:39.392187566Z" level=info msg="CreateContainer within sandbox \"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\""
Jul 11 00:24:39.392805 containerd[1555]: time="2025-07-11T00:24:39.392741835Z" level=info msg="StartContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\""
Jul 11 00:24:39.394104 containerd[1555]: time="2025-07-11T00:24:39.393849645Z" level=info msg="connecting to shim 932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e" address="unix:///run/containerd/s/9c742b3ef4c7cc1e490300a669c6fcdbd149f0b243829c81673762bc510d444d" protocol=ttrpc version=3
Jul 11 00:24:39.394688 containerd[1555]: time="2025-07-11T00:24:39.394662700Z" level=info msg="Container 6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:39.413484 containerd[1555]: time="2025-07-11T00:24:39.413433583Z" level=info msg="CreateContainer within sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\""
Jul 11 00:24:39.414940 containerd[1555]: time="2025-07-11T00:24:39.413877437Z" level=info msg="StartContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\""
Jul 11 00:24:39.414940 containerd[1555]: time="2025-07-11T00:24:39.414940391Z" level=info msg="connecting to shim 6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a" address="unix:///run/containerd/s/45028f7a9d86f01eaea62f1eab98620f1acb1718fa8d34578eb2c92f029a0edc" protocol=ttrpc version=3
Jul 11 00:24:39.416619 systemd[1]: Started cri-containerd-932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e.scope - libcontainer container 932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e.
Jul 11 00:24:39.438272 systemd[1]: Started cri-containerd-6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a.scope - libcontainer container 6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a.
Jul 11 00:24:39.475280 containerd[1555]: time="2025-07-11T00:24:39.475215808Z" level=info msg="StartContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" returns successfully"
Jul 11 00:24:39.497369 containerd[1555]: time="2025-07-11T00:24:39.497301072Z" level=info msg="StartContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" returns successfully"
Jul 11 00:24:39.595839 containerd[1555]: time="2025-07-11T00:24:39.594884027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" id:\"fb254c711d4a1e917ddbb5d340cdd0b2180e5860586f7932068011a9a2c10068\" pid:3419 exited_at:{seconds:1752193479 nanos:594120695}"
Jul 11 00:24:39.627531 kubelet[2742]: I0711 00:24:39.627448 2742 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 11 00:24:39.709378 systemd[1]: Created slice kubepods-burstable-podf38c26a7_82e7_4119_a852_61d96183e813.slice - libcontainer container kubepods-burstable-podf38c26a7_82e7_4119_a852_61d96183e813.slice.
Jul 11 00:24:39.715553 systemd[1]: Created slice kubepods-burstable-podd5f33022_2bdd_47c4_8532_b403e086f587.slice - libcontainer container kubepods-burstable-podd5f33022_2bdd_47c4_8532_b403e086f587.slice.
Jul 11 00:24:39.804198 kubelet[2742]: I0711 00:24:39.803945 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7f7r\" (UniqueName: \"kubernetes.io/projected/f38c26a7-82e7-4119-a852-61d96183e813-kube-api-access-t7f7r\") pod \"coredns-668d6bf9bc-ntn8w\" (UID: \"f38c26a7-82e7-4119-a852-61d96183e813\") " pod="kube-system/coredns-668d6bf9bc-ntn8w"
Jul 11 00:24:39.804198 kubelet[2742]: I0711 00:24:39.804037 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f38c26a7-82e7-4119-a852-61d96183e813-config-volume\") pod \"coredns-668d6bf9bc-ntn8w\" (UID: \"f38c26a7-82e7-4119-a852-61d96183e813\") " pod="kube-system/coredns-668d6bf9bc-ntn8w"
Jul 11 00:24:39.804198 kubelet[2742]: I0711 00:24:39.804066 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5f33022-2bdd-47c4-8532-b403e086f587-config-volume\") pod \"coredns-668d6bf9bc-c7z6n\" (UID: \"d5f33022-2bdd-47c4-8532-b403e086f587\") " pod="kube-system/coredns-668d6bf9bc-c7z6n"
Jul 11 00:24:39.804198 kubelet[2742]: I0711 00:24:39.804112 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gx4m\" (UniqueName: \"kubernetes.io/projected/d5f33022-2bdd-47c4-8532-b403e086f587-kube-api-access-7gx4m\") pod \"coredns-668d6bf9bc-c7z6n\" (UID: \"d5f33022-2bdd-47c4-8532-b403e086f587\") " pod="kube-system/coredns-668d6bf9bc-c7z6n"
Jul 11 00:24:40.143548 kubelet[2742]: E0711 00:24:40.143498 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:40.149639 kubelet[2742]: E0711 00:24:40.149591 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:40.313885 kubelet[2742]: E0711 00:24:40.313827 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:40.314658 containerd[1555]: time="2025-07-11T00:24:40.314604577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ntn8w,Uid:f38c26a7-82e7-4119-a852-61d96183e813,Namespace:kube-system,Attempt:0,}"
Jul 11 00:24:40.319227 kubelet[2742]: E0711 00:24:40.319203 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:40.319677 containerd[1555]: time="2025-07-11T00:24:40.319651306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7z6n,Uid:d5f33022-2bdd-47c4-8532-b403e086f587,Namespace:kube-system,Attempt:0,}"
Jul 11 00:24:40.652622 kubelet[2742]: I0711 00:24:40.652188 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-47bf2" podStartSLOduration=9.536720005 podStartE2EDuration="25.65214492s" podCreationTimestamp="2025-07-11 00:24:15 +0000 UTC" firstStartedPulling="2025-07-11 00:24:17.660462559 +0000 UTC m=+7.868228342" lastFinishedPulling="2025-07-11 00:24:33.775887474 +0000 UTC m=+23.983653257" observedRunningTime="2025-07-11 00:24:40.651887467 +0000 UTC m=+30.859653250" watchObservedRunningTime="2025-07-11 00:24:40.65214492 +0000 UTC m=+30.859910703"
Jul 11 00:24:41.151312 kubelet[2742]: E0711 00:24:41.151260 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:41.151810 kubelet[2742]: E0711 00:24:41.151414 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:41.647119 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:38422.service - OpenSSH per-connection server daemon (10.0.0.1:38422).
Jul 11 00:24:41.720024 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 38422 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:24:41.721878 sshd-session[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:24:41.728623 systemd-logind[1534]: New session 10 of user core.
Jul 11 00:24:41.737402 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 00:24:41.989335 sshd[3520]: Connection closed by 10.0.0.1 port 38422
Jul 11 00:24:41.990857 sshd-session[3518]: pam_unix(sshd:session): session closed for user core
Jul 11 00:24:41.996163 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:38422.service: Deactivated successfully.
Jul 11 00:24:41.999595 systemd-networkd[1474]: cilium_host: Link UP
Jul 11 00:24:41.999871 systemd-networkd[1474]: cilium_net: Link UP
Jul 11 00:24:42.000200 systemd-networkd[1474]: cilium_net: Gained carrier
Jul 11 00:24:42.000328 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 00:24:42.000451 systemd-networkd[1474]: cilium_host: Gained carrier
Jul 11 00:24:42.002577 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
Jul 11 00:24:42.005582 systemd-logind[1534]: Removed session 10.
Jul 11 00:24:42.139621 systemd-networkd[1474]: cilium_vxlan: Link UP
Jul 11 00:24:42.139634 systemd-networkd[1474]: cilium_vxlan: Gained carrier
Jul 11 00:24:42.153887 kubelet[2742]: E0711 00:24:42.153833 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:42.343238 systemd-networkd[1474]: cilium_net: Gained IPv6LL
Jul 11 00:24:42.390121 kernel: NET: Registered PF_ALG protocol family
Jul 11 00:24:42.879412 systemd-networkd[1474]: cilium_host: Gained IPv6LL
Jul 11 00:24:43.152364 systemd-networkd[1474]: lxc_health: Link UP
Jul 11 00:24:43.162341 systemd-networkd[1474]: lxc_health: Gained carrier
Jul 11 00:24:43.390885 systemd-networkd[1474]: lxc158f7dd8690f: Link UP
Jul 11 00:24:43.404138 kernel: eth0: renamed from tmp12e09
Jul 11 00:24:43.404286 kernel: eth0: renamed from tmp0bbe7
Jul 11 00:24:43.407637 systemd-networkd[1474]: lxc1197184ecc57: Link UP
Jul 11 00:24:43.409290 systemd-networkd[1474]: lxc158f7dd8690f: Gained carrier
Jul 11 00:24:43.411705 systemd-networkd[1474]: lxc1197184ecc57: Gained carrier
Jul 11 00:24:43.531124 kubelet[2742]: E0711 00:24:43.529695 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:43.550060 kubelet[2742]: I0711 00:24:43.549378 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bcfsv" podStartSLOduration=7.324126188 podStartE2EDuration="27.549359358s" podCreationTimestamp="2025-07-11 00:24:16 +0000 UTC" firstStartedPulling="2025-07-11 00:24:17.780538637 +0000 UTC m=+7.988304420" lastFinishedPulling="2025-07-11 00:24:38.005771797 +0000 UTC m=+28.213537590" observedRunningTime="2025-07-11 00:24:40.838496136 +0000 UTC m=+31.046261919" watchObservedRunningTime="2025-07-11 00:24:43.549359358 +0000 UTC m=+33.757125141"
Jul 11 00:24:43.776347 systemd-networkd[1474]: cilium_vxlan: Gained IPv6LL
Jul 11 00:24:44.160618 kubelet[2742]: E0711 00:24:44.160566 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:44.607433 systemd-networkd[1474]: lxc_health: Gained IPv6LL
Jul 11 00:24:44.799359 systemd-networkd[1474]: lxc1197184ecc57: Gained IPv6LL
Jul 11 00:24:45.161948 kubelet[2742]: E0711 00:24:45.161896 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:45.183336 systemd-networkd[1474]: lxc158f7dd8690f: Gained IPv6LL
Jul 11 00:24:47.008857 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:38430.service - OpenSSH per-connection server daemon (10.0.0.1:38430).
Jul 11 00:24:47.571545 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 38430 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:24:47.573596 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:24:47.578513 systemd-logind[1534]: New session 11 of user core.
Jul 11 00:24:47.588242 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 00:24:47.720310 sshd[3926]: Connection closed by 10.0.0.1 port 38430
Jul 11 00:24:47.720684 sshd-session[3917]: pam_unix(sshd:session): session closed for user core
Jul 11 00:24:47.725817 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:38430.service: Deactivated successfully.
Jul 11 00:24:47.728158 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 00:24:47.729276 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit.
Jul 11 00:24:47.731256 systemd-logind[1534]: Removed session 11.
Jul 11 00:24:48.764826 containerd[1555]: time="2025-07-11T00:24:48.764754856Z" level=info msg="connecting to shim 0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085" address="unix:///run/containerd/s/8817256ab30e8586c015f05a14cda7498706af8da8c1565268dd412e25d44500" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:24:48.809653 containerd[1555]: time="2025-07-11T00:24:48.809574432Z" level=info msg="connecting to shim 12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a" address="unix:///run/containerd/s/c5d45f3297c8e7d1f42c5956aaa4b9399fccfddf05b3ef68f035114c87ba77a6" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:24:48.833280 systemd[1]: Started cri-containerd-0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085.scope - libcontainer container 0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085.
Jul 11 00:24:48.836862 systemd[1]: Started cri-containerd-12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a.scope - libcontainer container 12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a.
Jul 11 00:24:48.848035 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:24:48.852027 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:24:48.949647 containerd[1555]: time="2025-07-11T00:24:48.949574767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7z6n,Uid:d5f33022-2bdd-47c4-8532-b403e086f587,Namespace:kube-system,Attempt:0,} returns sandbox id \"12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a\""
Jul 11 00:24:48.950599 kubelet[2742]: E0711 00:24:48.950566 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:48.952467 containerd[1555]: time="2025-07-11T00:24:48.952425252Z" level=info msg="CreateContainer within sandbox \"12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:24:49.007402 containerd[1555]: time="2025-07-11T00:24:49.007351234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ntn8w,Uid:f38c26a7-82e7-4119-a852-61d96183e813,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085\""
Jul 11 00:24:49.008228 kubelet[2742]: E0711 00:24:49.008196 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:49.009961 containerd[1555]: time="2025-07-11T00:24:49.009912409Z" level=info msg="CreateContainer within sandbox \"0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:24:49.251307 containerd[1555]: time="2025-07-11T00:24:49.251231365Z" level=info msg="Container 7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:49.266785 containerd[1555]: time="2025-07-11T00:24:49.266740491Z" level=info msg="Container b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:24:49.416492 containerd[1555]: time="2025-07-11T00:24:49.416412609Z" level=info msg="CreateContainer within sandbox \"12e097b909dafaec8571ad62527543b1f5e023921b7791f87499e999b7c8134a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45\""
Jul 11 00:24:49.417249 containerd[1555]: time="2025-07-11T00:24:49.416995485Z" level=info msg="StartContainer for \"7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45\""
Jul 11 00:24:49.418279 containerd[1555]: time="2025-07-11T00:24:49.418246099Z" level=info msg="connecting to shim 7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45" address="unix:///run/containerd/s/c5d45f3297c8e7d1f42c5956aaa4b9399fccfddf05b3ef68f035114c87ba77a6" protocol=ttrpc version=3
Jul 11 00:24:49.432672 containerd[1555]: time="2025-07-11T00:24:49.432612339Z" level=info msg="CreateContainer within sandbox \"0bbe7ba3b2d96c4517480fac38ef77da694a4c06c07f17e9e1d14d85ca91e085\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d\""
Jul 11 00:24:49.433748 containerd[1555]: time="2025-07-11T00:24:49.433711070Z" level=info msg="StartContainer for \"b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d\""
Jul 11 00:24:49.436184 containerd[1555]: time="2025-07-11T00:24:49.436146883Z" level=info msg="connecting to shim b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d" address="unix:///run/containerd/s/8817256ab30e8586c015f05a14cda7498706af8da8c1565268dd412e25d44500" protocol=ttrpc version=3
Jul 11 00:24:49.439320 systemd[1]: Started cri-containerd-7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45.scope - libcontainer container 7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45.
Jul 11 00:24:49.467216 systemd[1]: Started cri-containerd-b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d.scope - libcontainer container b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d.
Jul 11 00:24:49.538929 containerd[1555]: time="2025-07-11T00:24:49.538716031Z" level=info msg="StartContainer for \"7d79c044bc0948a9034566a789f818de43834f6c2b52be1e1d4806a53c627f45\" returns successfully"
Jul 11 00:24:49.549442 containerd[1555]: time="2025-07-11T00:24:49.549382827Z" level=info msg="StartContainer for \"b8e6f51e0bf3a41e43e2ca6226f55dcc2d9a6497087586a18cf711dbbeb8524d\" returns successfully"
Jul 11 00:24:49.745320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218506060.mount: Deactivated successfully.
Jul 11 00:24:50.175141 kubelet[2742]: E0711 00:24:50.175000 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:50.177260 kubelet[2742]: E0711 00:24:50.177236 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:50.574282 kubelet[2742]: I0711 00:24:50.574205 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c7z6n" podStartSLOduration=34.574182911 podStartE2EDuration="34.574182911s" podCreationTimestamp="2025-07-11 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:50.28469881 +0000 UTC m=+40.492464613" watchObservedRunningTime="2025-07-11 00:24:50.574182911 +0000 UTC m=+40.781948694"
Jul 11 00:24:50.575050 kubelet[2742]: I0711 00:24:50.574974 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ntn8w" podStartSLOduration=34.574963847 podStartE2EDuration="34.574963847s" podCreationTimestamp="2025-07-11 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:50.573404439 +0000 UTC m=+40.781170232" watchObservedRunningTime="2025-07-11 00:24:50.574963847 +0000 UTC m=+40.782729630"
Jul 11 00:24:51.179874 kubelet[2742]: E0711 00:24:51.179475 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:51.179874 kubelet[2742]: E0711 00:24:51.179718 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:52.182219 kubelet[2742]: E0711 00:24:52.182161 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:52.182780 kubelet[2742]: E0711 00:24:52.182267 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:24:52.740476 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:59730.service - OpenSSH per-connection server daemon (10.0.0.1:59730).
Jul 11 00:24:52.812011 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 59730 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:24:52.813948 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:24:52.819428 systemd-logind[1534]: New session 12 of user core.
Jul 11 00:24:52.827268 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 00:24:52.950257 sshd[4115]: Connection closed by 10.0.0.1 port 59730
Jul 11 00:24:52.950579 sshd-session[4113]: pam_unix(sshd:session): session closed for user core
Jul 11 00:24:52.955032 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:59730.service: Deactivated successfully.
Jul 11 00:24:52.957184 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 00:24:52.958146 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit.
Jul 11 00:24:52.959514 systemd-logind[1534]: Removed session 12.
Jul 11 00:24:53.186744 kubelet[2742]: E0711 00:24:53.186603 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:57.973165 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:59738.service - OpenSSH per-connection server daemon (10.0.0.1:59738). Jul 11 00:24:58.020175 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 59738 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:58.021944 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:58.026717 systemd-logind[1534]: New session 13 of user core. Jul 11 00:24:58.036416 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:24:58.165435 sshd[4132]: Connection closed by 10.0.0.1 port 59738 Jul 11 00:24:58.165836 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:58.171697 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:59738.service: Deactivated successfully. Jul 11 00:24:58.174504 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:24:58.175830 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:24:58.177404 systemd-logind[1534]: Removed session 13. Jul 11 00:25:03.184852 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:43602.service - OpenSSH per-connection server daemon (10.0.0.1:43602). Jul 11 00:25:03.254491 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 43602 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:03.256701 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:03.262281 systemd-logind[1534]: New session 14 of user core. Jul 11 00:25:03.273330 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 11 00:25:03.398005 sshd[4148]: Connection closed by 10.0.0.1 port 43602 Jul 11 00:25:03.398438 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:03.413798 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:43602.service: Deactivated successfully. Jul 11 00:25:03.416448 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:25:03.417434 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:25:03.420518 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:43618.service - OpenSSH per-connection server daemon (10.0.0.1:43618). Jul 11 00:25:03.421383 systemd-logind[1534]: Removed session 14. Jul 11 00:25:03.484722 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 43618 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:03.486801 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:03.493030 systemd-logind[1534]: New session 15 of user core. Jul 11 00:25:03.504414 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:25:03.674845 sshd[4165]: Connection closed by 10.0.0.1 port 43618 Jul 11 00:25:03.675256 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:03.690722 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:43618.service: Deactivated successfully. Jul 11 00:25:03.695681 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:25:03.698061 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:25:03.703877 systemd-logind[1534]: Removed session 15. Jul 11 00:25:03.707452 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:43620.service - OpenSSH per-connection server daemon (10.0.0.1:43620). 
Jul 11 00:25:03.765655 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 43620 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:03.767616 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:03.773246 systemd-logind[1534]: New session 16 of user core. Jul 11 00:25:03.785408 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:25:03.937396 sshd[4178]: Connection closed by 10.0.0.1 port 43620 Jul 11 00:25:03.937790 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:03.942324 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:43620.service: Deactivated successfully. Jul 11 00:25:03.944992 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:25:03.947967 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:25:03.950259 systemd-logind[1534]: Removed session 16. Jul 11 00:25:08.963389 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:43630.service - OpenSSH per-connection server daemon (10.0.0.1:43630). Jul 11 00:25:09.011166 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 43630 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:09.013207 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:09.018164 systemd-logind[1534]: New session 17 of user core. Jul 11 00:25:09.039446 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:25:09.164401 sshd[4193]: Connection closed by 10.0.0.1 port 43630 Jul 11 00:25:09.164763 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:09.170508 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:43630.service: Deactivated successfully. Jul 11 00:25:09.173118 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:25:09.174028 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. 
Jul 11 00:25:09.175805 systemd-logind[1534]: Removed session 17. Jul 11 00:25:14.180312 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:56310.service - OpenSSH per-connection server daemon (10.0.0.1:56310). Jul 11 00:25:14.245909 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:14.247836 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:14.254100 systemd-logind[1534]: New session 18 of user core. Jul 11 00:25:14.261361 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:25:14.377901 sshd[4210]: Connection closed by 10.0.0.1 port 56310 Jul 11 00:25:14.378269 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:14.383498 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:56310.service: Deactivated successfully. Jul 11 00:25:14.385544 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:25:14.386521 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:25:14.388037 systemd-logind[1534]: Removed session 18. Jul 11 00:25:19.391611 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:56314.service - OpenSSH per-connection server daemon (10.0.0.1:56314). Jul 11 00:25:19.448874 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 56314 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:19.451476 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:19.457037 systemd-logind[1534]: New session 19 of user core. Jul 11 00:25:19.468340 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 11 00:25:19.587712 sshd[4228]: Connection closed by 10.0.0.1 port 56314 Jul 11 00:25:19.588303 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:19.600851 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:56314.service: Deactivated successfully. Jul 11 00:25:19.602719 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:25:19.603720 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:25:19.606991 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:32986.service - OpenSSH per-connection server daemon (10.0.0.1:32986). Jul 11 00:25:19.607914 systemd-logind[1534]: Removed session 19. Jul 11 00:25:19.665305 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 32986 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:19.666937 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:19.671830 systemd-logind[1534]: New session 20 of user core. Jul 11 00:25:19.681265 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:25:20.026604 sshd[4243]: Connection closed by 10.0.0.1 port 32986 Jul 11 00:25:20.026955 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:20.054513 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:32986.service: Deactivated successfully. Jul 11 00:25:20.056899 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:25:20.057829 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:25:20.061418 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:33000.service - OpenSSH per-connection server daemon (10.0.0.1:33000). Jul 11 00:25:20.062279 systemd-logind[1534]: Removed session 20. 
Jul 11 00:25:20.129025 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 33000 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:20.131162 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:20.137180 systemd-logind[1534]: New session 21 of user core. Jul 11 00:25:20.145266 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:25:21.100743 sshd[4257]: Connection closed by 10.0.0.1 port 33000 Jul 11 00:25:21.102111 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:21.117537 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:33000.service: Deactivated successfully. Jul 11 00:25:21.120512 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:25:21.123470 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:25:21.133238 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:33016.service - OpenSSH per-connection server daemon (10.0.0.1:33016). Jul 11 00:25:21.142157 systemd-logind[1534]: Removed session 21. Jul 11 00:25:21.202928 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 33016 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:21.205973 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:21.214021 systemd-logind[1534]: New session 22 of user core. Jul 11 00:25:21.222269 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:25:21.635662 sshd[4278]: Connection closed by 10.0.0.1 port 33016 Jul 11 00:25:21.636075 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:21.647837 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:33016.service: Deactivated successfully. Jul 11 00:25:21.650351 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:25:21.651690 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. 
Jul 11 00:25:21.654863 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:33030.service - OpenSSH per-connection server daemon (10.0.0.1:33030). Jul 11 00:25:21.655650 systemd-logind[1534]: Removed session 22. Jul 11 00:25:21.712885 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 33030 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:21.714833 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:21.719876 systemd-logind[1534]: New session 23 of user core. Jul 11 00:25:21.729380 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:25:21.848659 sshd[4292]: Connection closed by 10.0.0.1 port 33030 Jul 11 00:25:21.849013 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:21.853376 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:33030.service: Deactivated successfully. Jul 11 00:25:21.855489 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:25:21.856283 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:25:21.857478 systemd-logind[1534]: Removed session 23. Jul 11 00:25:25.933537 kubelet[2742]: E0711 00:25:25.933471 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:26.865669 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:33032.service - OpenSSH per-connection server daemon (10.0.0.1:33032). Jul 11 00:25:26.935510 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 33032 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:26.937820 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:26.943831 systemd-logind[1534]: New session 24 of user core. Jul 11 00:25:26.955353 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 11 00:25:27.073945 sshd[4308]: Connection closed by 10.0.0.1 port 33032 Jul 11 00:25:27.074331 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:27.079002 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:33032.service: Deactivated successfully. Jul 11 00:25:27.081405 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:25:27.082589 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:25:27.084012 systemd-logind[1534]: Removed session 24. Jul 11 00:25:32.092182 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:44570.service - OpenSSH per-connection server daemon (10.0.0.1:44570). Jul 11 00:25:32.153929 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 44570 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:32.156182 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:32.161843 systemd-logind[1534]: New session 25 of user core. Jul 11 00:25:32.175422 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:25:32.292552 sshd[4325]: Connection closed by 10.0.0.1 port 44570 Jul 11 00:25:32.292918 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:32.298253 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:44570.service: Deactivated successfully. Jul 11 00:25:32.300524 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:25:32.301734 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:25:32.303268 systemd-logind[1534]: Removed session 25. Jul 11 00:25:37.310728 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:44578.service - OpenSSH per-connection server daemon (10.0.0.1:44578). 
Jul 11 00:25:37.364956 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 44578 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:37.366825 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:37.371547 systemd-logind[1534]: New session 26 of user core. Jul 11 00:25:37.379245 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:25:37.488423 sshd[4340]: Connection closed by 10.0.0.1 port 44578 Jul 11 00:25:37.488697 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:37.492823 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:44578.service: Deactivated successfully. Jul 11 00:25:37.494889 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:25:37.495774 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:25:37.497178 systemd-logind[1534]: Removed session 26. Jul 11 00:25:38.923800 kubelet[2742]: E0711 00:25:38.923730 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:38.924427 kubelet[2742]: E0711 00:25:38.923872 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:41.924521 kubelet[2742]: E0711 00:25:41.924457 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:42.503900 systemd[1]: Started sshd@26-10.0.0.83:22-10.0.0.1:47880.service - OpenSSH per-connection server daemon (10.0.0.1:47880). 
Jul 11 00:25:42.563102 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 47880 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:42.565002 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:42.570547 systemd-logind[1534]: New session 27 of user core. Jul 11 00:25:42.584368 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:25:42.728272 sshd[4355]: Connection closed by 10.0.0.1 port 47880 Jul 11 00:25:42.728812 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:42.743273 systemd[1]: sshd@26-10.0.0.83:22-10.0.0.1:47880.service: Deactivated successfully. Jul 11 00:25:42.746520 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:25:42.747766 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:25:42.753368 systemd[1]: Started sshd@27-10.0.0.83:22-10.0.0.1:47894.service - OpenSSH per-connection server daemon (10.0.0.1:47894). Jul 11 00:25:42.754057 systemd-logind[1534]: Removed session 27. Jul 11 00:25:42.811103 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 47894 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:42.813156 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:42.819287 systemd-logind[1534]: New session 28 of user core. Jul 11 00:25:42.826265 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 11 00:25:43.924337 kubelet[2742]: E0711 00:25:43.924209 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:45.092624 containerd[1555]: time="2025-07-11T00:25:45.092548043Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:25:45.094488 containerd[1555]: time="2025-07-11T00:25:45.094434318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" id:\"55971b17887da2299076e3e0d6c4651c3709df26940dde32a15f9f50fb0f85c6\" pid:4392 exited_at:{seconds:1752193545 nanos:94042949}" Jul 11 00:25:45.111185 containerd[1555]: time="2025-07-11T00:25:45.111130181Z" level=info msg="StopContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" with timeout 2 (s)" Jul 11 00:25:45.118755 containerd[1555]: time="2025-07-11T00:25:45.118715606Z" level=info msg="Stop container \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" with signal terminated" Jul 11 00:25:45.132482 systemd-networkd[1474]: lxc_health: Link DOWN Jul 11 00:25:45.132492 systemd-networkd[1474]: lxc_health: Lost carrier Jul 11 00:25:45.156398 systemd[1]: cri-containerd-6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a.scope: Deactivated successfully. 
Jul 11 00:25:45.156856 containerd[1555]: time="2025-07-11T00:25:45.156672410Z" level=info msg="received exit event container_id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" pid:3373 exited_at:{seconds:1752193545 nanos:156283014}" Jul 11 00:25:45.156854 systemd[1]: cri-containerd-6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a.scope: Consumed 7.588s CPU time, 123.3M memory peak, 216K read from disk, 13.3M written to disk. Jul 11 00:25:45.157026 containerd[1555]: time="2025-07-11T00:25:45.156906212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" id:\"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" pid:3373 exited_at:{seconds:1752193545 nanos:156283014}" Jul 11 00:25:45.180192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a-rootfs.mount: Deactivated successfully. Jul 11 00:25:45.226574 containerd[1555]: time="2025-07-11T00:25:45.226521324Z" level=info msg="StopContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" with timeout 30 (s)" Jul 11 00:25:45.227195 containerd[1555]: time="2025-07-11T00:25:45.227138330Z" level=info msg="Stop container \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" with signal terminated" Jul 11 00:25:45.240803 systemd[1]: cri-containerd-932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e.scope: Deactivated successfully. 
Jul 11 00:25:45.242218 containerd[1555]: time="2025-07-11T00:25:45.242161002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" id:\"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" pid:3359 exited_at:{seconds:1752193545 nanos:241727251}" Jul 11 00:25:45.242218 containerd[1555]: time="2025-07-11T00:25:45.242180128Z" level=info msg="received exit event container_id:\"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" id:\"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" pid:3359 exited_at:{seconds:1752193545 nanos:241727251}" Jul 11 00:25:45.267250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e-rootfs.mount: Deactivated successfully. Jul 11 00:25:45.832154 containerd[1555]: time="2025-07-11T00:25:45.832055432Z" level=info msg="StopContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" returns successfully" Jul 11 00:25:45.863873 containerd[1555]: time="2025-07-11T00:25:45.863811277Z" level=info msg="StopPodSandbox for \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\"" Jul 11 00:25:45.864061 containerd[1555]: time="2025-07-11T00:25:45.863903571Z" level=info msg="Container to stop \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:25:45.864061 containerd[1555]: time="2025-07-11T00:25:45.863918349Z" level=info msg="Container to stop \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:25:45.864061 containerd[1555]: time="2025-07-11T00:25:45.863927276Z" level=info msg="Container to stop \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jul 11 00:25:45.864061 containerd[1555]: time="2025-07-11T00:25:45.863936713Z" level=info msg="Container to stop \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:25:45.864061 containerd[1555]: time="2025-07-11T00:25:45.863945371Z" level=info msg="Container to stop \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:25:45.870985 systemd[1]: cri-containerd-f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57.scope: Deactivated successfully. Jul 11 00:25:45.872903 containerd[1555]: time="2025-07-11T00:25:45.872850630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" id:\"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" pid:2860 exit_status:137 exited_at:{seconds:1752193545 nanos:871863104}" Jul 11 00:25:45.900282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57-rootfs.mount: Deactivated successfully. 
Jul 11 00:25:45.999754 containerd[1555]: time="2025-07-11T00:25:45.999685902Z" level=info msg="StopContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" returns successfully" Jul 11 00:25:46.000313 containerd[1555]: time="2025-07-11T00:25:46.000284021Z" level=info msg="StopPodSandbox for \"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\"" Jul 11 00:25:46.000369 containerd[1555]: time="2025-07-11T00:25:46.000343524Z" level=info msg="Container to stop \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:25:46.008616 systemd[1]: cri-containerd-6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1.scope: Deactivated successfully. Jul 11 00:25:46.036071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1-rootfs.mount: Deactivated successfully. Jul 11 00:25:46.169268 containerd[1555]: time="2025-07-11T00:25:46.168932050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" id:\"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" pid:2939 exit_status:137 exited_at:{seconds:1752193546 nanos:13312822}" Jul 11 00:25:46.171726 containerd[1555]: time="2025-07-11T00:25:46.170182173Z" level=info msg="shim disconnected" id=f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57 namespace=k8s.io Jul 11 00:25:46.171726 containerd[1555]: time="2025-07-11T00:25:46.170221617Z" level=warning msg="cleaning up after shim disconnected" id=f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57 namespace=k8s.io Jul 11 00:25:46.173694 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57-shm.mount: Deactivated successfully. 
Jul 11 00:25:46.190310 containerd[1555]: time="2025-07-11T00:25:46.170229161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:25:46.190434 containerd[1555]: time="2025-07-11T00:25:46.181276937Z" level=info msg="received exit event sandbox_id:\"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" exit_status:137 exited_at:{seconds:1752193545 nanos:871863104}" Jul 11 00:25:46.198730 containerd[1555]: time="2025-07-11T00:25:46.198657409Z" level=info msg="TearDown network for sandbox \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" successfully" Jul 11 00:25:46.198730 containerd[1555]: time="2025-07-11T00:25:46.198718575Z" level=info msg="StopPodSandbox for \"f7dc9ded8e7f2225a53921e1be06a20b95a35c33bc5fdd5026af5d9a6a505c57\" returns successfully" Jul 11 00:25:46.216911 containerd[1555]: time="2025-07-11T00:25:46.216359880Z" level=info msg="shim disconnected" id=6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1 namespace=k8s.io Jul 11 00:25:46.216911 containerd[1555]: time="2025-07-11T00:25:46.216405547Z" level=warning msg="cleaning up after shim disconnected" id=6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1 namespace=k8s.io Jul 11 00:25:46.216911 containerd[1555]: time="2025-07-11T00:25:46.216417639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:25:46.216911 containerd[1555]: time="2025-07-11T00:25:46.216629179Z" level=info msg="received exit event sandbox_id:\"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" exit_status:137 exited_at:{seconds:1752193546 nanos:13312822}" Jul 11 00:25:46.220812 containerd[1555]: time="2025-07-11T00:25:46.220613227Z" level=info msg="TearDown network for sandbox \"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" successfully" Jul 11 00:25:46.220812 containerd[1555]: time="2025-07-11T00:25:46.220669263Z" level=info msg="StopPodSandbox for 
\"6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1\" returns successfully" Jul 11 00:25:46.221243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b279eee41574fc60b21d6dc29390ab99663392ed28ce41dd88abea8546056f1-shm.mount: Deactivated successfully. Jul 11 00:25:46.307759 kubelet[2742]: I0711 00:25:46.307716 2742 scope.go:117] "RemoveContainer" containerID="6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a" Jul 11 00:25:46.311347 containerd[1555]: time="2025-07-11T00:25:46.311295871Z" level=info msg="RemoveContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\"" Jul 11 00:25:46.319445 containerd[1555]: time="2025-07-11T00:25:46.319380549Z" level=info msg="RemoveContainer for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" returns successfully" Jul 11 00:25:46.319721 kubelet[2742]: I0711 00:25:46.319681 2742 scope.go:117] "RemoveContainer" containerID="a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de" Jul 11 00:25:46.321412 containerd[1555]: time="2025-07-11T00:25:46.321369717Z" level=info msg="RemoveContainer for \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\"" Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360834 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-etc-cni-netd\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360909 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-hubble-tls\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360934 2742 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xqmhq\" (UniqueName: \"kubernetes.io/projected/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-kube-api-access-xqmhq\") pod \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\" (UID: \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\") " Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360960 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-config-path\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360956 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.360988 kubelet[2742]: I0711 00:25:46.360985 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c8991f2-517d-42a1-b3ce-0350097a6c28-clustermesh-secrets\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361062 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-run\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361118 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-hostproc\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361132 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-bpf-maps\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361164 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-cgroup\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361179 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-kernel\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.361360 kubelet[2742]: I0711 00:25:46.361196 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cni-path\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361215 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kztbg\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-kube-api-access-kztbg\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361228 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-xtables-lock\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361241 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-lib-modules\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361258 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-cilium-config-path\") pod \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\" (UID: \"1e7f3743-8a45-4afb-a758-1ec2ac4384ab\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361278 
2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-net\") pod \"6c8991f2-517d-42a1-b3ce-0350097a6c28\" (UID: \"6c8991f2-517d-42a1-b3ce-0350097a6c28\") " Jul 11 00:25:46.362074 kubelet[2742]: I0711 00:25:46.361328 2742 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.362254 kubelet[2742]: I0711 00:25:46.361350 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362254 kubelet[2742]: I0711 00:25:46.361367 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362254 kubelet[2742]: I0711 00:25:46.361382 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362254 kubelet[2742]: I0711 00:25:46.361398 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362254 kubelet[2742]: I0711 00:25:46.361411 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362376 kubelet[2742]: I0711 00:25:46.361427 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362376 kubelet[2742]: I0711 00:25:46.361443 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362376 kubelet[2742]: I0711 00:25:46.361681 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.362376 kubelet[2742]: I0711 00:25:46.361723 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:25:46.365399 kubelet[2742]: I0711 00:25:46.365268 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c8991f2-517d-42a1-b3ce-0350097a6c28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:25:46.365891 kubelet[2742]: I0711 00:25:46.365838 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-kube-api-access-xqmhq" (OuterVolumeSpecName: "kube-api-access-xqmhq") pod "1e7f3743-8a45-4afb-a758-1ec2ac4384ab" (UID: "1e7f3743-8a45-4afb-a758-1ec2ac4384ab"). InnerVolumeSpecName "kube-api-access-xqmhq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:25:46.366799 kubelet[2742]: I0711 00:25:46.366760 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:25:46.367054 systemd[1]: var-lib-kubelet-pods-1e7f3743\x2d8a45\x2d4afb\x2da758\x2d1ec2ac4384ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxqmhq.mount: Deactivated successfully. Jul 11 00:25:46.367241 systemd[1]: var-lib-kubelet-pods-6c8991f2\x2d517d\x2d42a1\x2db3ce\x2d0350097a6c28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:25:46.367319 systemd[1]: var-lib-kubelet-pods-6c8991f2\x2d517d\x2d42a1\x2db3ce\x2d0350097a6c28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:25:46.369584 kubelet[2742]: I0711 00:25:46.369499 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e7f3743-8a45-4afb-a758-1ec2ac4384ab" (UID: "1e7f3743-8a45-4afb-a758-1ec2ac4384ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:25:46.370273 kubelet[2742]: I0711 00:25:46.370235 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:25:46.371715 kubelet[2742]: I0711 00:25:46.371664 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-kube-api-access-kztbg" (OuterVolumeSpecName: "kube-api-access-kztbg") pod "6c8991f2-517d-42a1-b3ce-0350097a6c28" (UID: "6c8991f2-517d-42a1-b3ce-0350097a6c28"). InnerVolumeSpecName "kube-api-access-kztbg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:25:46.373617 systemd[1]: var-lib-kubelet-pods-6c8991f2\x2d517d\x2d42a1\x2db3ce\x2d0350097a6c28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkztbg.mount: Deactivated successfully. Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462431 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqmhq\" (UniqueName: \"kubernetes.io/projected/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-kube-api-access-xqmhq\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462483 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462500 2742 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c8991f2-517d-42a1-b3ce-0350097a6c28-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462512 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462533 2742 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462548 2742 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462563 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.462612 kubelet[2742]: I0711 00:25:46.462575 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462585 2742 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462596 2742 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462608 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7f3743-8a45-4afb-a758-1ec2ac4384ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462620 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-host-proc-sys-net\") on node \"localhost\" DevicePath 
\"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462631 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kztbg\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-kube-api-access-kztbg\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462643 2742 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c8991f2-517d-42a1-b3ce-0350097a6c28-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.463005 kubelet[2742]: I0711 00:25:46.462654 2742 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c8991f2-517d-42a1-b3ce-0350097a6c28-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:25:46.500298 containerd[1555]: time="2025-07-11T00:25:46.500243498Z" level=info msg="RemoveContainer for \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" returns successfully" Jul 11 00:25:46.500644 kubelet[2742]: I0711 00:25:46.500569 2742 scope.go:117] "RemoveContainer" containerID="6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926" Jul 11 00:25:46.503291 containerd[1555]: time="2025-07-11T00:25:46.503249307Z" level=info msg="RemoveContainer for \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\"" Jul 11 00:25:46.616200 systemd[1]: Removed slice kubepods-burstable-pod6c8991f2_517d_42a1_b3ce_0350097a6c28.slice - libcontainer container kubepods-burstable-pod6c8991f2_517d_42a1_b3ce_0350097a6c28.slice. Jul 11 00:25:46.616343 systemd[1]: kubepods-burstable-pod6c8991f2_517d_42a1_b3ce_0350097a6c28.slice: Consumed 7.720s CPU time, 123.6M memory peak, 216K read from disk, 13.3M written to disk. Jul 11 00:25:46.617452 systemd[1]: Removed slice kubepods-besteffort-pod1e7f3743_8a45_4afb_a758_1ec2ac4384ab.slice - libcontainer container kubepods-besteffort-pod1e7f3743_8a45_4afb_a758_1ec2ac4384ab.slice. 
Jul 11 00:25:46.700791 containerd[1555]: time="2025-07-11T00:25:46.700724948Z" level=info msg="RemoveContainer for \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" returns successfully" Jul 11 00:25:46.701070 kubelet[2742]: I0711 00:25:46.701013 2742 scope.go:117] "RemoveContainer" containerID="16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262" Jul 11 00:25:46.702838 containerd[1555]: time="2025-07-11T00:25:46.702796372Z" level=info msg="RemoveContainer for \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\"" Jul 11 00:25:46.740548 containerd[1555]: time="2025-07-11T00:25:46.740361877Z" level=info msg="RemoveContainer for \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" returns successfully" Jul 11 00:25:46.740690 kubelet[2742]: I0711 00:25:46.740646 2742 scope.go:117] "RemoveContainer" containerID="432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94" Jul 11 00:25:46.744556 containerd[1555]: time="2025-07-11T00:25:46.744500647Z" level=info msg="RemoveContainer for \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\"" Jul 11 00:25:46.749839 containerd[1555]: time="2025-07-11T00:25:46.749781396Z" level=info msg="RemoveContainer for \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" returns successfully" Jul 11 00:25:46.750126 kubelet[2742]: I0711 00:25:46.750099 2742 scope.go:117] "RemoveContainer" containerID="6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a" Jul 11 00:25:46.750536 containerd[1555]: time="2025-07-11T00:25:46.750464016Z" level=error msg="ContainerStatus for \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\": not found" Jul 11 00:25:46.754157 sshd[4371]: Connection closed by 10.0.0.1 port 47894 Jul 11 00:25:46.755785 kubelet[2742]: E0711 
00:25:46.755148 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\": not found" containerID="6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a" Jul 11 00:25:46.755785 kubelet[2742]: I0711 00:25:46.755244 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a"} err="failed to get container status \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d45ca1d40e5e6f54f5e55017883d7f6e7f01d91035bfdb6b1da4587b4d32a\": not found" Jul 11 00:25:46.755785 kubelet[2742]: I0711 00:25:46.755410 2742 scope.go:117] "RemoveContainer" containerID="a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de" Jul 11 00:25:46.756331 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:46.762151 containerd[1555]: time="2025-07-11T00:25:46.756166751Z" level=error msg="ContainerStatus for \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\": not found" Jul 11 00:25:46.762432 kubelet[2742]: E0711 00:25:46.762380 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\": not found" containerID="a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de" Jul 11 00:25:46.762515 kubelet[2742]: I0711 00:25:46.762434 2742 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de"} err="failed to get container status \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\": rpc error: code = NotFound desc = an error occurred when try to find container \"a21ab2f222d909e2a79776bd293b1535e59f21e14d6bb31e37bc6256f17ac3de\": not found" Jul 11 00:25:46.762515 kubelet[2742]: I0711 00:25:46.762468 2742 scope.go:117] "RemoveContainer" containerID="6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926" Jul 11 00:25:46.762996 containerd[1555]: time="2025-07-11T00:25:46.762894794Z" level=error msg="ContainerStatus for \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\": not found" Jul 11 00:25:46.763156 kubelet[2742]: E0711 00:25:46.763124 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\": not found" containerID="6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926" Jul 11 00:25:46.763217 kubelet[2742]: I0711 00:25:46.763197 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926"} err="failed to get container status \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\": rpc error: code = NotFound desc = an error occurred when try to find container \"6900540308c7f2bc8a262e07f85a0a6bab77e056bcc71bb83065fe83d16d3926\": not found" Jul 11 00:25:46.763260 kubelet[2742]: I0711 00:25:46.763215 2742 scope.go:117] "RemoveContainer" containerID="16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262" Jul 11 00:25:46.763473 containerd[1555]: 
time="2025-07-11T00:25:46.763386914Z" level=error msg="ContainerStatus for \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\": not found" Jul 11 00:25:46.763569 kubelet[2742]: E0711 00:25:46.763506 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\": not found" containerID="16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262" Jul 11 00:25:46.763569 kubelet[2742]: I0711 00:25:46.763530 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262"} err="failed to get container status \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\": rpc error: code = NotFound desc = an error occurred when try to find container \"16e3f20100d173e250cc0105505e43d9e8556c6938f9f3d36d8a9469eb16b262\": not found" Jul 11 00:25:46.763569 kubelet[2742]: I0711 00:25:46.763550 2742 scope.go:117] "RemoveContainer" containerID="432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94" Jul 11 00:25:46.764043 systemd[1]: sshd@27-10.0.0.83:22-10.0.0.1:47894.service: Deactivated successfully. 
Jul 11 00:25:46.764255 containerd[1555]: time="2025-07-11T00:25:46.763888272Z" level=error msg="ContainerStatus for \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\": not found" Jul 11 00:25:46.764485 kubelet[2742]: E0711 00:25:46.764232 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\": not found" containerID="432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94" Jul 11 00:25:46.764485 kubelet[2742]: I0711 00:25:46.764259 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94"} err="failed to get container status \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\": rpc error: code = NotFound desc = an error occurred when try to find container \"432fd769f530c003c5950e691a02d5487a70a439569765f4c92e368f0c6c6b94\": not found" Jul 11 00:25:46.764485 kubelet[2742]: I0711 00:25:46.764280 2742 scope.go:117] "RemoveContainer" containerID="932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e" Jul 11 00:25:46.766509 systemd[1]: session-28.scope: Deactivated successfully. Jul 11 00:25:46.766805 containerd[1555]: time="2025-07-11T00:25:46.766729431Z" level=info msg="RemoveContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\"" Jul 11 00:25:46.770701 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit. Jul 11 00:25:46.772993 systemd[1]: Started sshd@28-10.0.0.83:22-10.0.0.1:47902.service - OpenSSH per-connection server daemon (10.0.0.1:47902). 
Jul 11 00:25:46.774220 kubelet[2742]: I0711 00:25:46.773574 2742 scope.go:117] "RemoveContainer" containerID="932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e" Jul 11 00:25:46.774220 kubelet[2742]: E0711 00:25:46.774150 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\": not found" containerID="932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e" Jul 11 00:25:46.774220 kubelet[2742]: I0711 00:25:46.774184 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e"} err="failed to get container status \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\": not found" Jul 11 00:25:46.774330 containerd[1555]: time="2025-07-11T00:25:46.773272033Z" level=info msg="RemoveContainer for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" returns successfully" Jul 11 00:25:46.774330 containerd[1555]: time="2025-07-11T00:25:46.773999909Z" level=error msg="ContainerStatus for \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"932f7f3cac7f70b284f8fd0e2b0123c7d53617e83cedb9e4a099219c3f3cfd8e\": not found" Jul 11 00:25:46.774769 systemd-logind[1534]: Removed session 28. 
Jul 11 00:25:46.840424 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 47902 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:25:46.842155 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:25:46.847516 systemd-logind[1534]: New session 29 of user core.
Jul 11 00:25:46.865271 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 11 00:25:47.499096 sshd[4523]: Connection closed by 10.0.0.1 port 47902
Jul 11 00:25:47.495725 sshd-session[4521]: pam_unix(sshd:session): session closed for user core
Jul 11 00:25:47.513818 systemd[1]: sshd@28-10.0.0.83:22-10.0.0.1:47902.service: Deactivated successfully.
Jul 11 00:25:47.519554 systemd[1]: session-29.scope: Deactivated successfully.
Jul 11 00:25:47.523312 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
Jul 11 00:25:47.524199 kubelet[2742]: I0711 00:25:47.524126 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e7f3743-8a45-4afb-a758-1ec2ac4384ab" containerName="cilium-operator"
Jul 11 00:25:47.524199 kubelet[2742]: I0711 00:25:47.524155 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="6c8991f2-517d-42a1-b3ce-0350097a6c28" containerName="cilium-agent"
Jul 11 00:25:47.530548 systemd[1]: Started sshd@29-10.0.0.83:22-10.0.0.1:47916.service - OpenSSH per-connection server daemon (10.0.0.1:47916).
Jul 11 00:25:47.537148 systemd-logind[1534]: Removed session 29.
Jul 11 00:25:47.554476 systemd[1]: Created slice kubepods-burstable-pod4775949d_5be1_41de_b34a_0a1d5ac1d244.slice - libcontainer container kubepods-burstable-pod4775949d_5be1_41de_b34a_0a1d5ac1d244.slice.
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569529 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4775949d-5be1-41de-b34a-0a1d5ac1d244-cilium-ipsec-secrets\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569587 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-cilium-cgroup\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569621 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4775949d-5be1-41de-b34a-0a1d5ac1d244-cilium-config-path\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569643 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknrc\" (UniqueName: \"kubernetes.io/projected/4775949d-5be1-41de-b34a-0a1d5ac1d244-kube-api-access-hknrc\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569668 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-cni-path\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571108 kubelet[2742]: I0711 00:25:47.569689 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4775949d-5be1-41de-b34a-0a1d5ac1d244-hubble-tls\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569717 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-lib-modules\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569742 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-hostproc\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569761 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-etc-cni-netd\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569781 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-xtables-lock\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569804 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-host-proc-sys-kernel\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571435 kubelet[2742]: I0711 00:25:47.569823 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-cilium-run\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571614 kubelet[2742]: I0711 00:25:47.569849 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-host-proc-sys-net\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571614 kubelet[2742]: I0711 00:25:47.569871 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4775949d-5be1-41de-b34a-0a1d5ac1d244-bpf-maps\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.571614 kubelet[2742]: I0711 00:25:47.569891 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4775949d-5be1-41de-b34a-0a1d5ac1d244-clustermesh-secrets\") pod \"cilium-zphjs\" (UID: \"4775949d-5be1-41de-b34a-0a1d5ac1d244\") " pod="kube-system/cilium-zphjs"
Jul 11 00:25:47.633450 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 47916 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:25:47.635384 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:25:47.640696 systemd-logind[1534]: New session 30 of user core.
Jul 11 00:25:47.647385 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 11 00:25:47.703461 sshd[4537]: Connection closed by 10.0.0.1 port 47916
Jul 11 00:25:47.703911 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Jul 11 00:25:47.725765 systemd[1]: sshd@29-10.0.0.83:22-10.0.0.1:47916.service: Deactivated successfully.
Jul 11 00:25:47.728384 systemd[1]: session-30.scope: Deactivated successfully.
Jul 11 00:25:47.729377 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Jul 11 00:25:47.733927 systemd[1]: Started sshd@30-10.0.0.83:22-10.0.0.1:47932.service - OpenSSH per-connection server daemon (10.0.0.1:47932).
Jul 11 00:25:47.734740 systemd-logind[1534]: Removed session 30.
Jul 11 00:25:47.792986 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 47932 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:25:47.794883 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:25:47.799955 systemd-logind[1534]: New session 31 of user core.
Jul 11 00:25:47.809442 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 11 00:25:47.864060 kubelet[2742]: E0711 00:25:47.863988 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:47.866859 containerd[1555]: time="2025-07-11T00:25:47.866802666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zphjs,Uid:4775949d-5be1-41de-b34a-0a1d5ac1d244,Namespace:kube-system,Attempt:0,}"
Jul 11 00:25:47.893161 containerd[1555]: time="2025-07-11T00:25:47.892609003Z" level=info msg="connecting to shim d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:25:47.925336 systemd[1]: Started cri-containerd-d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6.scope - libcontainer container d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6.
Jul 11 00:25:47.929107 kubelet[2742]: I0711 00:25:47.928690 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7f3743-8a45-4afb-a758-1ec2ac4384ab" path="/var/lib/kubelet/pods/1e7f3743-8a45-4afb-a758-1ec2ac4384ab/volumes"
Jul 11 00:25:47.930841 kubelet[2742]: I0711 00:25:47.930813 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8991f2-517d-42a1-b3ce-0350097a6c28" path="/var/lib/kubelet/pods/6c8991f2-517d-42a1-b3ce-0350097a6c28/volumes"
Jul 11 00:25:47.970515 containerd[1555]: time="2025-07-11T00:25:47.970457526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zphjs,Uid:4775949d-5be1-41de-b34a-0a1d5ac1d244,Namespace:kube-system,Attempt:0,} returns sandbox id \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\""
Jul 11 00:25:47.971323 kubelet[2742]: E0711 00:25:47.971260 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:47.975705 containerd[1555]: time="2025-07-11T00:25:47.975651249Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 11 00:25:48.093272 containerd[1555]: time="2025-07-11T00:25:48.093209125Z" level=info msg="Container 34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:25:48.107453 containerd[1555]: time="2025-07-11T00:25:48.107388472Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\""
Jul 11 00:25:48.108033 containerd[1555]: time="2025-07-11T00:25:48.107980660Z" level=info msg="StartContainer for \"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\""
Jul 11 00:25:48.109113 containerd[1555]: time="2025-07-11T00:25:48.109049359Z" level=info msg="connecting to shim 34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" protocol=ttrpc version=3
Jul 11 00:25:48.131269 systemd[1]: Started cri-containerd-34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb.scope - libcontainer container 34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb.
Jul 11 00:25:48.178921 systemd[1]: cri-containerd-34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb.scope: Deactivated successfully.
Jul 11 00:25:48.180750 containerd[1555]: time="2025-07-11T00:25:48.180690547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\" id:\"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\" pid:4615 exited_at:{seconds:1752193548 nanos:180132152}"
Jul 11 00:25:48.343895 containerd[1555]: time="2025-07-11T00:25:48.343569894Z" level=info msg="received exit event container_id:\"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\" id:\"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\" pid:4615 exited_at:{seconds:1752193548 nanos:180132152}"
Jul 11 00:25:48.345393 containerd[1555]: time="2025-07-11T00:25:48.345314179Z" level=info msg="StartContainer for \"34972b3fe2246430a870feb12d5e8688fc0d0f6e923bb6792ac82c26a4efcacb\" returns successfully"
Jul 11 00:25:49.351493 kubelet[2742]: E0711 00:25:49.351444 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:49.354143 containerd[1555]: time="2025-07-11T00:25:49.354100259Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 11 00:25:49.749538 containerd[1555]: time="2025-07-11T00:25:49.747447431Z" level=info msg="Container 13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:25:49.785686 containerd[1555]: time="2025-07-11T00:25:49.785617128Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\""
Jul 11 00:25:49.786323 containerd[1555]: time="2025-07-11T00:25:49.786205169Z" level=info msg="StartContainer for \"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\""
Jul 11 00:25:49.787281 containerd[1555]: time="2025-07-11T00:25:49.787257667Z" level=info msg="connecting to shim 13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" protocol=ttrpc version=3
Jul 11 00:25:49.810293 systemd[1]: Started cri-containerd-13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2.scope - libcontainer container 13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2.
Jul 11 00:25:49.852746 systemd[1]: cri-containerd-13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2.scope: Deactivated successfully.
Jul 11 00:25:49.853373 containerd[1555]: time="2025-07-11T00:25:49.853335149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\" id:\"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\" pid:4662 exited_at:{seconds:1752193549 nanos:852881242}"
Jul 11 00:25:50.020793 kubelet[2742]: E0711 00:25:50.020627 2742 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 11 00:25:50.043137 containerd[1555]: time="2025-07-11T00:25:50.043001865Z" level=info msg="received exit event container_id:\"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\" id:\"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\" pid:4662 exited_at:{seconds:1752193549 nanos:852881242}"
Jul 11 00:25:50.044357 containerd[1555]: time="2025-07-11T00:25:50.044296450Z" level=info msg="StartContainer for \"13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2\" returns successfully"
Jul 11 00:25:50.066128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13d2930e9db88aafbae8e3370a78e4d8e38bcb8bf63068317d256393208c12b2-rootfs.mount: Deactivated successfully.
Jul 11 00:25:50.358155 kubelet[2742]: E0711 00:25:50.358112 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:50.373422 containerd[1555]: time="2025-07-11T00:25:50.373370743Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:25:50.541191 containerd[1555]: time="2025-07-11T00:25:50.540068796Z" level=info msg="Container 2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:25:50.608863 containerd[1555]: time="2025-07-11T00:25:50.608707209Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\""
Jul 11 00:25:50.609675 containerd[1555]: time="2025-07-11T00:25:50.609352718Z" level=info msg="StartContainer for \"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\""
Jul 11 00:25:50.611010 containerd[1555]: time="2025-07-11T00:25:50.610982366Z" level=info msg="connecting to shim 2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" protocol=ttrpc version=3
Jul 11 00:25:50.636338 systemd[1]: Started cri-containerd-2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1.scope - libcontainer container 2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1.
Jul 11 00:25:50.685313 systemd[1]: cri-containerd-2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1.scope: Deactivated successfully.
Jul 11 00:25:50.686594 containerd[1555]: time="2025-07-11T00:25:50.686523987Z" level=info msg="received exit event container_id:\"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\" id:\"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\" pid:4705 exited_at:{seconds:1752193550 nanos:686231254}"
Jul 11 00:25:50.686727 containerd[1555]: time="2025-07-11T00:25:50.686655956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\" id:\"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\" pid:4705 exited_at:{seconds:1752193550 nanos:686231254}"
Jul 11 00:25:50.687642 containerd[1555]: time="2025-07-11T00:25:50.687608696Z" level=info msg="StartContainer for \"2b7726b6863070e512a57a732afb40b5cda10bd45a80b435e3b3efed35461ff1\" returns successfully"
Jul 11 00:25:51.362992 kubelet[2742]: E0711 00:25:51.362930 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:51.365193 containerd[1555]: time="2025-07-11T00:25:51.365121704Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:25:51.549830 containerd[1555]: time="2025-07-11T00:25:51.547664430Z" level=info msg="Container f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:25:51.561521 containerd[1555]: time="2025-07-11T00:25:51.561452218Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\""
Jul 11 00:25:51.562486 containerd[1555]: time="2025-07-11T00:25:51.562213195Z" level=info msg="StartContainer for \"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\""
Jul 11 00:25:51.563641 containerd[1555]: time="2025-07-11T00:25:51.563613770Z" level=info msg="connecting to shim f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" protocol=ttrpc version=3
Jul 11 00:25:51.586252 systemd[1]: Started cri-containerd-f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16.scope - libcontainer container f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16.
Jul 11 00:25:51.618979 systemd[1]: cri-containerd-f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16.scope: Deactivated successfully.
Jul 11 00:25:51.620387 containerd[1555]: time="2025-07-11T00:25:51.620354329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\" id:\"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\" pid:4746 exited_at:{seconds:1752193551 nanos:619334834}"
Jul 11 00:25:51.681257 containerd[1555]: time="2025-07-11T00:25:51.681180785Z" level=info msg="received exit event container_id:\"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\" id:\"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\" pid:4746 exited_at:{seconds:1752193551 nanos:619334834}"
Jul 11 00:25:51.690705 containerd[1555]: time="2025-07-11T00:25:51.690661208Z" level=info msg="StartContainer for \"f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16\" returns successfully"
Jul 11 00:25:51.742904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f02f75af5acebcf5efaacdc94d376c7641c4caeb2b1b4aadfc5c6c98e8757c16-rootfs.mount: Deactivated successfully.
Jul 11 00:25:52.162779 kubelet[2742]: I0711 00:25:52.162714 2742 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:25:52Z","lastTransitionTime":"2025-07-11T00:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 11 00:25:52.369314 kubelet[2742]: E0711 00:25:52.369254 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:52.382969 containerd[1555]: time="2025-07-11T00:25:52.382917996Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:25:52.394546 containerd[1555]: time="2025-07-11T00:25:52.394472643Z" level=info msg="Container 19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:25:52.404309 containerd[1555]: time="2025-07-11T00:25:52.404256507Z" level=info msg="CreateContainer within sandbox \"d422b6e990e26f93b695ba79c6c0bb6770d5edbb085ed2ccff19a8c6bbbae1b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\""
Jul 11 00:25:52.404854 containerd[1555]: time="2025-07-11T00:25:52.404810072Z" level=info msg="StartContainer for \"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\""
Jul 11 00:25:52.405876 containerd[1555]: time="2025-07-11T00:25:52.405846570Z" level=info msg="connecting to shim 19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a" address="unix:///run/containerd/s/6cca2db5aaf6e432bff2eb437a537e51e87085c994f7aae8bf4341201f2584ce" protocol=ttrpc version=3
Jul 11 00:25:52.430294 systemd[1]: Started cri-containerd-19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a.scope - libcontainer container 19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a.
Jul 11 00:25:52.480228 containerd[1555]: time="2025-07-11T00:25:52.480164176Z" level=info msg="StartContainer for \"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" returns successfully"
Jul 11 00:25:52.563441 containerd[1555]: time="2025-07-11T00:25:52.563292770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"e8737e3eff3e792807299ebe687d890ecba6eb1d0661f5b2573c0af3b235bb56\" pid:4814 exited_at:{seconds:1752193552 nanos:562877126}"
Jul 11 00:25:52.973150 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 11 00:25:53.375298 kubelet[2742]: E0711 00:25:53.375263 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:53.565426 kubelet[2742]: I0711 00:25:53.565347 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zphjs" podStartSLOduration=6.565326907 podStartE2EDuration="6.565326907s" podCreationTimestamp="2025-07-11 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:25:53.56490537 +0000 UTC m=+103.772671143" watchObservedRunningTime="2025-07-11 00:25:53.565326907 +0000 UTC m=+103.773092690"
Jul 11 00:25:54.377549 kubelet[2742]: E0711 00:25:54.377485 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:54.799649 containerd[1555]: time="2025-07-11T00:25:54.799519024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"17a23af0dccba12c6d0bd51cf6bcd7650a225fb510508a2f8732de833a85128c\" pid:4891 exit_status:1 exited_at:{seconds:1752193554 nanos:799138695}"
Jul 11 00:25:55.379806 kubelet[2742]: E0711 00:25:55.379768 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:56.883853 systemd-networkd[1474]: lxc_health: Link UP
Jul 11 00:25:56.884791 systemd-networkd[1474]: lxc_health: Gained carrier
Jul 11 00:25:57.103388 containerd[1555]: time="2025-07-11T00:25:57.103335742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"6dc4c958823b635a4ae9e9b5056ce57b652e5a06dbaffaf33a08ee5b521c16d8\" pid:5339 exited_at:{seconds:1752193557 nanos:102809718}"
Jul 11 00:25:57.866978 kubelet[2742]: E0711 00:25:57.866911 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:58.386900 kubelet[2742]: E0711 00:25:58.386844 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:25:58.847374 systemd-networkd[1474]: lxc_health: Gained IPv6LL
Jul 11 00:25:59.283338 containerd[1555]: time="2025-07-11T00:25:59.283021928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"c76ea5371c7f44cc0c234712bc9a659cda03229316d6dbbf88961e513ec8c345\" pid:5377 exited_at:{seconds:1752193559 nanos:282411647}"
Jul 11 00:25:59.389178 kubelet[2742]: E0711 00:25:59.389124 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:26:01.392379 containerd[1555]: time="2025-07-11T00:26:01.392289181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"207dd1727718e5d84d05339306d4b5a44530154602c9d51f00f0b87f6fa7731d\" pid:5408 exited_at:{seconds:1752193561 nanos:391776484}"
Jul 11 00:26:03.498777 containerd[1555]: time="2025-07-11T00:26:03.498698924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bf6f42f693c2a7f9888ba6a579a19380e62039c9821bb2705e08be1a9d306a\" id:\"f96fd267de4a2408fa386fae9e19462bd48f2a57ec38426f643b794a2d46c5c1\" pid:5431 exited_at:{seconds:1752193563 nanos:498357850}"
Jul 11 00:26:03.508967 sshd[4550]: Connection closed by 10.0.0.1 port 47932
Jul 11 00:26:03.509327 sshd-session[4548]: pam_unix(sshd:session): session closed for user core
Jul 11 00:26:03.514527 systemd[1]: sshd@30-10.0.0.83:22-10.0.0.1:47932.service: Deactivated successfully.
Jul 11 00:26:03.516882 systemd[1]: session-31.scope: Deactivated successfully.
Jul 11 00:26:03.517927 systemd-logind[1534]: Session 31 logged out. Waiting for processes to exit.
Jul 11 00:26:03.519849 systemd-logind[1534]: Removed session 31.