Jul 7 00:09:39.961631 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025
Jul 7 00:09:39.961657 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:09:39.961665 kernel: BIOS-provided physical RAM map:
Jul 7 00:09:39.961742 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 00:09:39.961748 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 00:09:39.961755 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 00:09:39.961762 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 7 00:09:39.961771 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 7 00:09:39.961778 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 00:09:39.961784 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 00:09:39.961790 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:09:39.961797 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 00:09:39.961803 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:09:39.961809 kernel: NX (Execute Disable) protection: active
Jul 7 00:09:39.961819 kernel: APIC: Static calls initialized
Jul 7 00:09:39.961826 kernel: SMBIOS 2.8 present.
Jul 7 00:09:39.961833 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 7 00:09:39.961840 kernel: DMI: Memory slots populated: 1/1
Jul 7 00:09:39.961847 kernel: Hypervisor detected: KVM
Jul 7 00:09:39.961854 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:09:39.961861 kernel: kvm-clock: using sched offset of 3336529977 cycles
Jul 7 00:09:39.961868 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:09:39.961875 kernel: tsc: Detected 2794.746 MHz processor
Jul 7 00:09:39.961882 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:09:39.961892 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:09:39.961899 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 7 00:09:39.961906 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 00:09:39.961913 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:09:39.961920 kernel: Using GB pages for direct mapping
Jul 7 00:09:39.961927 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:09:39.961934 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 7 00:09:39.961942 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961951 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961958 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961965 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 7 00:09:39.961972 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961979 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961986 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.961993 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:39.962000 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 7 00:09:39.962012 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 7 00:09:39.962020 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 7 00:09:39.962027 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 7 00:09:39.962034 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 7 00:09:39.962041 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 7 00:09:39.962048 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 7 00:09:39.962057 kernel: No NUMA configuration found
Jul 7 00:09:39.962064 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 7 00:09:39.962072 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 7 00:09:39.962089 kernel: Zone ranges:
Jul 7 00:09:39.962098 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:09:39.962107 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 7 00:09:39.962116 kernel:   Normal   empty
Jul 7 00:09:39.962126 kernel:   Device   empty
Jul 7 00:09:39.962135 kernel: Movable zone start for each node
Jul 7 00:09:39.962144 kernel: Early memory node ranges
Jul 7 00:09:39.962155 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 00:09:39.962165 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 7 00:09:39.962174 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 7 00:09:39.962183 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:09:39.962192 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 00:09:39.962201 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 7 00:09:39.962210 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:09:39.962219 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:09:39.962228 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:09:39.962239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:09:39.962248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:09:39.962258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:09:39.962267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:09:39.962276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:09:39.962285 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:09:39.962294 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:09:39.962302 kernel: TSC deadline timer available
Jul 7 00:09:39.962311 kernel: CPU topo: Max. logical packages: 1
Jul 7 00:09:39.962323 kernel: CPU topo: Max. logical dies: 1
Jul 7 00:09:39.962330 kernel: CPU topo: Max. dies per package: 1
Jul 7 00:09:39.962337 kernel: CPU topo: Max. threads per core: 1
Jul 7 00:09:39.962344 kernel: CPU topo: Num. cores per package: 4
Jul 7 00:09:39.962351 kernel: CPU topo: Num. threads per package: 4
Jul 7 00:09:39.962358 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 00:09:39.962365 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:09:39.962373 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 00:09:39.962380 kernel: kvm-guest: setup PV sched yield
Jul 7 00:09:39.962387 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 00:09:39.962396 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:09:39.962404 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:09:39.962411 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 00:09:39.962418 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 00:09:39.962426 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 00:09:39.962433 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 00:09:39.962440 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:09:39.962447 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:09:39.962455 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:09:39.962465 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:09:39.962472 kernel: random: crng init done
Jul 7 00:09:39.962480 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:09:39.962487 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:09:39.962494 kernel: Fallback order for Node 0: 0
Jul 7 00:09:39.962501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 7 00:09:39.962509 kernel: Policy zone: DMA32
Jul 7 00:09:39.962516 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:09:39.962525 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 00:09:39.962532 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 00:09:39.962539 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 00:09:39.962547 kernel: Dynamic Preempt: voluntary
Jul 7 00:09:39.962554 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:09:39.962562 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:09:39.962569 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 00:09:39.962580 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:09:39.962588 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:09:39.962595 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:09:39.962607 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:09:39.962620 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 00:09:39.962633 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:09:39.962644 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:09:39.963816 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:09:39.963824 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 00:09:39.963831 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:09:39.963849 kernel: Console: colour VGA+ 80x25
Jul 7 00:09:39.963856 kernel: printk: legacy console [ttyS0] enabled
Jul 7 00:09:39.963864 kernel: ACPI: Core revision 20240827
Jul 7 00:09:39.963871 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:09:39.963881 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:09:39.963888 kernel: x2apic enabled
Jul 7 00:09:39.963896 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:09:39.963903 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 00:09:39.963911 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 00:09:39.963921 kernel: kvm-guest: setup PV IPIs
Jul 7 00:09:39.963928 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:09:39.963936 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 00:09:39.963943 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 7 00:09:39.963951 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 00:09:39.963958 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 00:09:39.963966 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 00:09:39.963974 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:09:39.963981 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:09:39.963991 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:09:39.963998 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 00:09:39.964006 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 00:09:39.964013 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:09:39.964021 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:09:39.964028 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 00:09:39.964037 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 00:09:39.964044 kernel: x86/bugs: return thunk changed
Jul 7 00:09:39.964054 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 00:09:39.964061 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:09:39.964069 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:09:39.964084 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:09:39.964092 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:09:39.964099 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 00:09:39.964109 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:09:39.964119 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:09:39.964128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:09:39.964140 kernel: landlock: Up and running.
Jul 7 00:09:39.964149 kernel: SELinux: Initializing.
Jul 7 00:09:39.964159 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:09:39.964169 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:09:39.964178 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 00:09:39.964188 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 00:09:39.964197 kernel: ... version:                0
Jul 7 00:09:39.964207 kernel: ... bit width:              48
Jul 7 00:09:39.964216 kernel: ... generic registers:      6
Jul 7 00:09:39.964228 kernel: ... value mask:             0000ffffffffffff
Jul 7 00:09:39.964237 kernel: ... max period:             00007fffffffffff
Jul 7 00:09:39.964246 kernel: ... fixed-purpose events:   0
Jul 7 00:09:39.964256 kernel: ... event mask:             000000000000003f
Jul 7 00:09:39.964265 kernel: signal: max sigframe size: 1776
Jul 7 00:09:39.964274 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:09:39.964284 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:09:39.964294 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:09:39.964303 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:09:39.964315 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:09:39.964325 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 00:09:39.964332 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 00:09:39.964340 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 7 00:09:39.964348 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 7 00:09:39.964355 kernel: devtmpfs: initialized
Jul 7 00:09:39.964363 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:09:39.964370 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:09:39.964378 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 00:09:39.964388 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:09:39.964395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:09:39.964403 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:09:39.964410 kernel: audit: type=2000 audit(1751846976.528:1): state=initialized audit_enabled=0 res=1
Jul 7 00:09:39.964418 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:09:39.964425 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:09:39.964433 kernel: cpuidle: using governor menu
Jul 7 00:09:39.964440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:09:39.964448 kernel: dca service started, version 1.12.1
Jul 7 00:09:39.964458 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 00:09:39.964465 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 00:09:39.964473 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:09:39.964480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:09:39.964488 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:09:39.964496 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:09:39.964503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:09:39.964511 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:09:39.964518 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:09:39.964528 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:09:39.964535 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:09:39.964543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:09:39.964550 kernel: ACPI: Interpreter enabled
Jul 7 00:09:39.964558 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 00:09:39.964565 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:09:39.964573 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:09:39.964580 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:09:39.964588 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 00:09:39.964597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:09:39.964824 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:09:39.964944 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 00:09:39.965060 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 00:09:39.965072 kernel: PCI host bridge to bus 0000:00
Jul 7 00:09:39.965228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:09:39.965333 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:09:39.965464 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:09:39.965570 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 7 00:09:39.965696 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 00:09:39.965804 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 7 00:09:39.965948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:09:39.966095 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 00:09:39.966247 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 00:09:39.966378 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 00:09:39.966492 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 00:09:39.966607 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 00:09:39.966740 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:09:39.966871 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 00:09:39.967003 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 7 00:09:39.967176 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 00:09:39.967294 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 00:09:39.967425 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:09:39.967540 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 00:09:39.967655 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 00:09:39.967797 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 00:09:39.967927 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 00:09:39.968048 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 7 00:09:39.968174 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 7 00:09:39.968286 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 7 00:09:39.969461 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 00:09:39.969592 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 00:09:39.969723 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 00:09:39.969847 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 00:09:39.969966 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 7 00:09:39.970101 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 7 00:09:39.970238 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 00:09:39.970352 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 00:09:39.970363 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:09:39.970371 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:09:39.970378 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:09:39.970389 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:09:39.970397 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 00:09:39.970404 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 00:09:39.970412 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 00:09:39.970419 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 00:09:39.970427 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 00:09:39.970434 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 00:09:39.970441 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 00:09:39.970449 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 00:09:39.970458 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 00:09:39.970466 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 00:09:39.970474 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 00:09:39.970481 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 00:09:39.970489 kernel: iommu: Default domain type: Translated
Jul 7 00:09:39.970496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:09:39.970504 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:09:39.970511 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:09:39.970519 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 00:09:39.970529 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 7 00:09:39.970641 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 00:09:39.970771 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 00:09:39.970884 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:09:39.970894 kernel: vgaarb: loaded
Jul 7 00:09:39.970902 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:09:39.970909 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:09:39.970917 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:09:39.970928 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:09:39.970936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:09:39.970944 kernel: pnp: PnP ACPI init
Jul 7 00:09:39.971064 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 00:09:39.971074 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 00:09:39.971091 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:09:39.971098 kernel: NET: Registered PF_INET protocol family
Jul 7 00:09:39.971107 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:09:39.971117 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:09:39.971126 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:09:39.971135 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:09:39.971145 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:09:39.971155 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:09:39.971164 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:09:39.971174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:09:39.971183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:09:39.971193 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:09:39.971327 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:09:39.971433 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:09:39.971536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:09:39.971638 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 7 00:09:39.971774 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 00:09:39.971880 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 7 00:09:39.971890 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:09:39.971898 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 00:09:39.971910 kernel: Initialise system trusted keyrings
Jul 7 00:09:39.971918 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:09:39.971926 kernel: Key type asymmetric registered
Jul 7 00:09:39.971934 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:09:39.971941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 00:09:39.971949 kernel: io scheduler mq-deadline registered
Jul 7 00:09:39.971957 kernel: io scheduler kyber registered
Jul 7 00:09:39.971964 kernel: io scheduler bfq registered
Jul 7 00:09:39.971972 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:09:39.971982 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 00:09:39.971990 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 00:09:39.971997 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 00:09:39.972005 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:09:39.972013 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:09:39.972021 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:09:39.972028 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:09:39.972036 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:09:39.972263 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 00:09:39.972278 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:09:39.972385 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 00:09:39.972491 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:09:39 UTC (1751846979)
Jul 7 00:09:39.972597 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 00:09:39.972608 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:09:39.972616 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:09:39.972623 kernel: Segment Routing with IPv6
Jul 7 00:09:39.972631 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:09:39.972642 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:09:39.972649 kernel: Key type dns_resolver registered
Jul 7 00:09:39.972657 kernel: IPI shorthand broadcast: enabled
Jul 7 00:09:39.972664 kernel: sched_clock: Marking stable (3272001974, 111790038)->(3401290083, -17498071)
Jul 7 00:09:39.972684 kernel: registered taskstats version 1
Jul 7 00:09:39.972692 kernel: Loading compiled-in X.509 certificates
Jul 7 00:09:39.972700 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06'
Jul 7 00:09:39.972708 kernel: Demotion targets for Node 0: null
Jul 7 00:09:39.972715 kernel: Key type .fscrypt registered
Jul 7 00:09:39.972725 kernel: Key type fscrypt-provisioning registered
Jul 7 00:09:39.972733 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:09:39.972740 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:09:39.972748 kernel: ima: No architecture policies found
Jul 7 00:09:39.972756 kernel: clk: Disabling unused clocks
Jul 7 00:09:39.972763 kernel: Warning: unable to open an initial console.
Jul 7 00:09:39.972771 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 00:09:39.972779 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 00:09:39.972786 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 00:09:39.972796 kernel: Run /init as init process
Jul 7 00:09:39.972803 kernel:   with arguments:
Jul 7 00:09:39.972811 kernel:     /init
Jul 7 00:09:39.972818 kernel:   with environment:
Jul 7 00:09:39.972826 kernel:     HOME=/
Jul 7 00:09:39.972833 kernel:     TERM=linux
Jul 7 00:09:39.972840 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:09:39.972852 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:09:39.972866 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:09:39.972887 systemd[1]: Detected virtualization kvm.
Jul 7 00:09:39.972895 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:39.972903 systemd[1]: Running in initrd.
Jul 7 00:09:39.972911 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:09:39.972920 systemd[1]: Hostname set to .
Jul 7 00:09:39.972930 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:09:39.972939 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:09:39.972947 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:39.972956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:39.972965 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:09:39.972973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:39.972981 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:09:39.972993 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:09:39.973002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:09:39.973011 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:09:39.973019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:39.973028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:39.973036 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:39.973044 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:39.973052 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:39.973064 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:39.973073 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:39.973092 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:39.973101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:09:39.973112 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:09:39.973120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:39.973128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:39.973137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:39.973147 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:39.973155 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:09:39.973164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:39.973172 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:09:39.973181 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:09:39.973193 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:09:39.973202 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:39.973210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:39.973218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:39.973227 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:39.973236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:39.973246 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:09:39.973255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:09:39.973283 systemd-journald[220]: Collecting audit messages is disabled.
Jul 7 00:09:39.973304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:09:39.973313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:39.973322 systemd-journald[220]: Journal started
Jul 7 00:09:39.973343 systemd-journald[220]: Runtime Journal (/run/log/journal/e3eaa6b9784d43568276c23ad076e514) is 6M, max 48.6M, 42.5M free.
Jul 7 00:09:39.963828 systemd-modules-load[221]: Inserted module 'overlay'
Jul 7 00:09:40.006592 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:40.006610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:09:40.006621 kernel: Bridge firewalling registered
Jul 7 00:09:39.997566 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 7 00:09:40.007102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:40.008850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:40.010127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:40.013667 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:40.016161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:40.019085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:40.027630 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:09:40.030820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:40.032494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:40.035276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:40.046998 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:40.049479 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:09:40.082720 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:09:40.085479 systemd-resolved[252]: Positive Trust Anchors:
Jul 7 00:09:40.085496 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:40.085526 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:40.088245 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 7 00:09:40.089457 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:40.090242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:40.207711 kernel: SCSI subsystem initialized
Jul 7 00:09:40.216707 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:09:40.227700 kernel: iscsi: registered transport (tcp)
Jul 7 00:09:40.253708 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:09:40.253791 kernel: QLogic iSCSI HBA Driver
Jul 7 00:09:40.277907 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:09:40.307495 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:09:40.308354 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:09:40.369513 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:40.373349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:09:40.447715 kernel: raid6: avx2x4 gen() 23690 MB/s
Jul 7 00:09:40.464711 kernel: raid6: avx2x2 gen() 28352 MB/s
Jul 7 00:09:40.481791 kernel: raid6: avx2x1 gen() 25199 MB/s
Jul 7 00:09:40.481823 kernel: raid6: using algorithm avx2x2 gen() 28352 MB/s
Jul 7 00:09:40.499795 kernel: raid6: .... xor() 19418 MB/s, rmw enabled
Jul 7 00:09:40.499838 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:09:40.520703 kernel: xor: automatically using best checksumming function avx
Jul 7 00:09:40.816710 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:09:40.825772 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:40.827703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:40.860598 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jul 7 00:09:40.866452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:40.870344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:09:40.903407 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jul 7 00:09:40.933779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:40.935493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:41.008161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:41.012630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:09:41.043748 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 00:09:41.047703 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 00:09:41.056974 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:09:41.057021 kernel: GPT:9289727 != 19775487
Jul 7 00:09:41.057036 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:09:41.057064 kernel: GPT:9289727 != 19775487
Jul 7 00:09:41.057077 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:09:41.057091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:09:41.057791 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:09:41.064688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 00:09:41.079730 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:09:41.079790 kernel: libata version 3.00 loaded.
Jul 7 00:09:41.093182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:41.093307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:41.097206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:41.101877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:41.104639 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:09:41.106266 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 00:09:41.107703 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 00:09:41.109964 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 00:09:41.110179 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 00:09:41.110366 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 00:09:41.187701 kernel: scsi host0: ahci
Jul 7 00:09:41.189779 kernel: scsi host1: ahci
Jul 7 00:09:41.202720 kernel: scsi host2: ahci
Jul 7 00:09:41.206697 kernel: scsi host3: ahci
Jul 7 00:09:41.206895 kernel: scsi host4: ahci
Jul 7 00:09:41.209566 kernel: scsi host5: ahci
Jul 7 00:09:41.209780 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 7 00:09:41.209809 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 7 00:09:41.212713 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 7 00:09:41.212770 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 7 00:09:41.212781 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 7 00:09:41.213253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 00:09:41.217545 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 7 00:09:41.230721 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 00:09:41.260930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:41.268600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 00:09:41.269064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 00:09:41.277702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:09:41.280179 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:09:41.316505 disk-uuid[635]: Primary Header is updated.
Jul 7 00:09:41.316505 disk-uuid[635]: Secondary Entries is updated.
Jul 7 00:09:41.316505 disk-uuid[635]: Secondary Header is updated.
Jul 7 00:09:41.320734 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:09:41.324707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:09:41.622702 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:41.622752 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:41.622771 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:41.623711 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 00:09:41.624942 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 00:09:41.625039 kernel: ata3.00: applying bridge limits
Jul 7 00:09:41.625707 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:41.626711 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:41.627701 kernel: ata3.00: configured for UDMA/100
Jul 7 00:09:41.629701 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 00:09:41.694712 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 00:09:41.695020 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:09:41.721139 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 00:09:42.124273 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:42.127040 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:42.129476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:42.131696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:42.134473 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:09:42.153519 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:42.390513 disk-uuid[636]: The operation has completed successfully.
Jul 7 00:09:42.392088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:09:42.432833 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:09:42.432953 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:09:42.460104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:09:42.490318 sh[665]: Success
Jul 7 00:09:42.513763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:09:42.513806 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:09:42.513826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:09:42.523698 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 00:09:42.559461 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:09:42.563615 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:09:42.584057 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:09:42.591639 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:09:42.591733 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677)
Jul 7 00:09:42.594221 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:09:42.594245 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:42.594258 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:09:42.599854 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:09:42.600828 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:09:42.602882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:09:42.603699 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:09:42.605577 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:09:42.629715 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Jul 7 00:09:42.629788 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:09:42.632207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:42.632237 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:09:42.639700 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:09:42.640621 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:09:42.642643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:09:42.756583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:42.760502 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:42.771205 ignition[755]: Ignition 2.21.0
Jul 7 00:09:42.771735 ignition[755]: Stage: fetch-offline
Jul 7 00:09:42.771772 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:42.771780 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:42.772055 ignition[755]: parsed url from cmdline: ""
Jul 7 00:09:42.772060 ignition[755]: no config URL provided
Jul 7 00:09:42.772065 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:09:42.772075 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:09:42.772103 ignition[755]: op(1): [started] loading QEMU firmware config module
Jul 7 00:09:42.772108 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 00:09:42.782638 ignition[755]: op(1): [finished] loading QEMU firmware config module
Jul 7 00:09:42.812249 systemd-networkd[853]: lo: Link UP
Jul 7 00:09:42.812259 systemd-networkd[853]: lo: Gained carrier
Jul 7 00:09:42.814076 systemd-networkd[853]: Enumeration completed
Jul 7 00:09:42.814162 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:42.815177 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:42.815449 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:42.815453 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:42.817489 systemd-networkd[853]: eth0: Link UP
Jul 7 00:09:42.817493 systemd-networkd[853]: eth0: Gained carrier
Jul 7 00:09:42.817502 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:42.836333 ignition[755]: parsing config with SHA512: 6249b49a8ac63d6866c6cd9df1ab8be49e152414f7713df46b0ba4ce7d1ff0f6f91a2266043197788c8b449480c2603ac0b28919d30179633e8c2eebb5d9056d
Jul 7 00:09:42.840421 unknown[755]: fetched base config from "system"
Jul 7 00:09:42.840434 unknown[755]: fetched user config from "qemu"
Jul 7 00:09:42.840848 ignition[755]: fetch-offline: fetch-offline passed
Jul 7 00:09:42.841727 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:09:42.840904 ignition[755]: Ignition finished successfully
Jul 7 00:09:42.843962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:09:42.845116 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 00:09:42.845987 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:09:42.942432 ignition[860]: Ignition 2.21.0
Jul 7 00:09:42.942446 ignition[860]: Stage: kargs
Jul 7 00:09:42.942616 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:42.942629 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:42.946408 ignition[860]: kargs: kargs passed
Jul 7 00:09:42.948002 ignition[860]: Ignition finished successfully
Jul 7 00:09:42.952370 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:42.954399 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:09:43.002976 ignition[868]: Ignition 2.21.0
Jul 7 00:09:43.003000 ignition[868]: Stage: disks
Jul 7 00:09:43.003156 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:43.003168 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:43.006809 ignition[868]: disks: disks passed
Jul 7 00:09:43.006895 ignition[868]: Ignition finished successfully
Jul 7 00:09:43.011116 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:09:43.013298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:09:43.013933 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:09:43.014290 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:43.014646 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:43.015194 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:43.016663 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:09:43.047743 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 00:09:43.192825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:09:43.195717 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:09:43.364708 kernel: EXT4-fs (vda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none.
Jul 7 00:09:43.365333 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:09:43.366555 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:09:43.369051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:43.372189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:09:43.372977 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:09:43.373034 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:09:43.373062 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:43.387829 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:09:43.391247 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:09:43.458576 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Jul 7 00:09:43.460968 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:09:43.461006 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:43.461019 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:09:43.467258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:43.501549 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:09:43.506105 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:09:43.511079 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:09:43.515333 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:09:43.602983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:09:43.630563 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:09:43.633428 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:43.659615 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:09:43.660930 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:09:43.675856 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:43.698805 ignition[1001]: INFO : Ignition 2.21.0
Jul 7 00:09:43.698805 ignition[1001]: INFO : Stage: mount
Jul 7 00:09:43.700619 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:43.700619 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:43.703551 ignition[1001]: INFO : mount: mount passed
Jul 7 00:09:43.704359 ignition[1001]: INFO : Ignition finished successfully
Jul 7 00:09:43.708069 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:09:43.710388 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:09:43.733206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:43.763121 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Jul 7 00:09:43.763146 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:09:43.763157 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:43.763943 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:09:43.768252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:43.806695 ignition[1031]: INFO : Ignition 2.21.0
Jul 7 00:09:43.806695 ignition[1031]: INFO : Stage: files
Jul 7 00:09:43.808574 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:43.808574 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:43.811070 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:09:43.811070 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:09:43.811070 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:09:43.815441 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:09:43.815441 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:09:43.815441 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:09:43.813820 unknown[1031]: wrote ssh authorized keys file for user: core
Jul 7 00:09:43.821023 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:09:43.821023 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 00:09:43.859327 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:09:44.091923 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:09:44.091923 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:09:44.095775 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 00:09:44.106856 systemd-networkd[853]: eth0: Gained IPv6LL
Jul 7 00:09:44.575978 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:09:44.653510 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:09:44.653510 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:44.657380 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:44.669552 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 00:09:45.307544 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:09:45.909964 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:45.909964 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 00:09:45.914556 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:45.917119 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:45.917119 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 00:09:45.917119 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 00:09:45.917119 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 00:09:45.925051 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 00:09:45.925051 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 00:09:45.925051 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 00:09:45.942176 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 00:09:45.946407 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 00:09:45.948446 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 00:09:45.948446 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:45.951900 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:45.951900 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:45.951900 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:45.951900 ignition[1031]: INFO : files: files passed
Jul 7 00:09:45.951900 ignition[1031]: INFO : Ignition finished successfully
Jul 7 00:09:45.953159 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:09:45.959856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:09:45.965097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:09:45.984389 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:09:45.984644 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:09:45.991659 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 00:09:45.998699 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:45.998699 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:46.005271 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:46.010437 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:46.011279 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:09:46.015771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:09:46.097034 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:09:46.097197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:09:46.098336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:09:46.100721 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:09:46.101103 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:09:46.104250 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:09:46.141877 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:46.145841 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:09:46.168920 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:46.171377 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:46.172781 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:09:46.174715 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:09:46.174935 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:46.177025 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:09:46.178743 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:09:46.180728 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:09:46.182737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:46.184640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:09:46.186740 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:09:46.189011 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:09:46.191066 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:46.193337 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:09:46.195271 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:09:46.197380 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:09:46.199081 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:09:46.199223 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:46.201364 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:46.202988 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:46.205024 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:09:46.205202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:46.207197 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:09:46.207360 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:46.209437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:09:46.209554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:09:46.211498 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:09:46.213209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:09:46.218786 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:46.220844 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:09:46.222685 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:09:46.225083 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:09:46.225187 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:46.226841 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:09:46.226933 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:46.228727 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:09:46.228849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:46.230980 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:09:46.231087 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:09:46.232482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:09:46.233941 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:09:46.234103 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:46.236400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:46.237973 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:09:46.238084 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:46.238358 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:09:46.238450 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:46.247657 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:09:46.249294 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:09:46.271634 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:09:46.344803 ignition[1086]: INFO : Ignition 2.21.0
Jul 7 00:09:46.344803 ignition[1086]: INFO : Stage: umount
Jul 7 00:09:46.346867 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:46.346867 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:09:46.346867 ignition[1086]: INFO : umount: umount passed
Jul 7 00:09:46.346867 ignition[1086]: INFO : Ignition finished successfully
Jul 7 00:09:46.349693 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:09:46.349867 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:09:46.351620 systemd[1]: Stopped target network.target - Network.
Jul 7 00:09:46.353224 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:09:46.353312 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:09:46.353557 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:09:46.353616 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:46.354068 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:09:46.354137 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:09:46.354368 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:09:46.354420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:46.354851 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:09:46.355322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:46.378061 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:09:46.378200 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:46.382341 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 00:09:46.382583 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:09:46.382707 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:09:46.386332 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 00:09:46.387253 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 00:09:46.389159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:09:46.389205 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:46.391504 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:09:46.392580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:09:46.392634 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:46.393233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:09:46.393278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:46.398064 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:09:46.398113 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:46.398574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:09:46.398617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:46.403709 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:46.407081 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 00:09:46.407151 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:09:46.427515 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:09:46.431870 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:46.432474 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:09:46.432523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:46.437369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:09:46.437429 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:46.437686 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:09:46.437741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:46.440355 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:09:46.440409 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:46.442967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:09:46.443013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:46.446681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:09:46.447166 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 00:09:46.447215 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:09:46.451011 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:09:46.451065 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:46.454184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:46.454242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:46.458928 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 00:09:46.458986 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 00:09:46.459035 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:09:46.459379 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:09:46.505929 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:09:46.513930 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:09:46.514059 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:09:46.834181 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:09:46.834349 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:46.836657 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:09:46.837129 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:09:46.837199 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:09:46.838715 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:09:46.863347 systemd[1]: Switching root.
Jul 7 00:09:46.938168 systemd-journald[220]: Journal stopped
Jul 7 00:09:48.816568 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:09:48.816642 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:09:48.816657 kernel: SELinux: policy capability open_perms=1
Jul 7 00:09:48.817145 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:09:48.817175 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:09:48.817190 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:09:48.817204 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:09:48.817218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:09:48.817232 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:09:48.817274 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 00:09:48.817286 kernel: audit: type=1403 audit(1751846987.899:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:09:48.817313 systemd[1]: Successfully loaded SELinux policy in 49.171ms.
Jul 7 00:09:48.817336 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.663ms.
Jul 7 00:09:48.817349 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:09:48.817364 systemd[1]: Detected virtualization kvm.
Jul 7 00:09:48.817376 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:48.817388 systemd[1]: Detected first boot.
Jul 7 00:09:48.817400 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:09:48.817419 zram_generator::config[1131]: No configuration found.
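The systemd startup banner above lists compile-time features as `+NAME` (built in) or `-NAME` (omitted). A tiny parser for that convention is sketched below; the function name `parse_features` and the shortened sample banner are our own, not a systemd API.

```python
# Split a systemd feature banner into enabled/disabled sets based on the
# leading "+" or "-" of each token.
def parse_features(banner):
    tokens = banner.split()
    enabled = {tok[1:] for tok in tokens if tok.startswith("+")}
    disabled = {tok[1:] for tok in tokens if tok.startswith("-")}
    return enabled, disabled

# Shortened from the banner in the log above.
banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT"
enabled, disabled = parse_features(banner)
print(sorted(disabled))  # ['APPARMOR', 'GCRYPT']
```

This matches what the log itself shows: this build has SELinux support compiled in (and the policy loads a few lines earlier) while AppArmor support is compiled out.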
Jul 7 00:09:48.817435 kernel: Guest personality initialized and is inactive
Jul 7 00:09:48.817446 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 00:09:48.817457 kernel: Initialized host personality
Jul 7 00:09:48.819727 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 00:09:48.819744 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:09:48.819758 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 00:09:48.819770 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 00:09:48.819782 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 00:09:48.819804 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:09:48.819816 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:09:48.819835 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:09:48.819847 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:09:48.819858 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:09:48.819870 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:09:48.819882 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:09:48.819895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:09:48.819914 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:09:48.819926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:48.819939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:48.819951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:09:48.819963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:09:48.819975 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:09:48.819987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:48.819998 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:09:48.820018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:48.820030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:48.820043 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 00:09:48.820055 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 00:09:48.820066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:09:48.820079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:09:48.820090 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:48.820102 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:48.820114 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:48.820132 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:48.820149 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:09:48.820161 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:09:48.820179 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 00:09:48.820190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:48.820202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:48.820214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:48.820225 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:09:48.820237 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:09:48.820255 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:09:48.820267 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:09:48.820279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:48.820291 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:09:48.820303 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:09:48.820314 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:09:48.820327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:09:48.820344 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:09:48.820362 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:09:48.820374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:48.820386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:48.820398 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:09:48.820409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:48.820421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:48.820433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:48.820451 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:09:48.820463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:48.820482 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:09:48.820493 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 00:09:48.820505 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 00:09:48.820517 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 00:09:48.820528 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 00:09:48.820541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:09:48.820553 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:48.820564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:48.820583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:09:48.820595 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:09:48.820609 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 00:09:48.820621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:48.820633 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 00:09:48.820651 systemd[1]: Stopped verity-setup.service.
Jul 7 00:09:48.820664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:48.820689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:09:48.820700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:09:48.820712 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:09:48.820731 kernel: fuse: init (API version 7.41)
Jul 7 00:09:48.820754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:09:48.820766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:09:48.820778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:09:48.820789 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:48.820801 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:09:48.820846 systemd-journald[1187]: Collecting audit messages is disabled.
Jul 7 00:09:48.820873 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:09:48.820894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:48.820906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:48.820918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:48.820930 systemd-journald[1187]: Journal started
Jul 7 00:09:48.820952 systemd-journald[1187]: Runtime Journal (/run/log/journal/e3eaa6b9784d43568276c23ad076e514) is 6M, max 48.6M, 42.5M free.
Jul 7 00:09:48.548313 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:09:48.574727 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 00:09:48.575208 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 00:09:48.829793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:48.829888 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:48.828530 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:09:48.828758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:09:48.830761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:48.833844 kernel: loop: module loaded
Jul 7 00:09:48.832662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:09:48.834245 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:09:48.836061 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:48.836344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:48.838149 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 00:09:48.852825 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:09:48.862316 kernel: ACPI: bus type drm_connector registered
Jul 7 00:09:48.859786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:09:48.863768 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:09:48.864977 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:09:48.865003 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:48.867031 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 00:09:48.871770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:09:48.872889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:48.875015 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:09:48.877467 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:09:48.878663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:48.880592 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:09:48.882930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:48.885793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:48.900574 systemd-journald[1187]: Time spent on flushing to /var/log/journal/e3eaa6b9784d43568276c23ad076e514 is 22.253ms for 976 entries.
Jul 7 00:09:48.900574 systemd-journald[1187]: System Journal (/var/log/journal/e3eaa6b9784d43568276c23ad076e514) is 8M, max 195.6M, 187.6M free.
Jul 7 00:09:48.936930 systemd-journald[1187]: Received client request to flush runtime journal.
Jul 7 00:09:48.936982 kernel: loop0: detected capacity change from 0 to 146240
Jul 7 00:09:48.891364 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:09:48.894596 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:09:48.898089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:48.898300 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:48.899603 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:09:48.905985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:09:48.907549 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:09:48.915870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:09:48.920859 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 00:09:48.923453 systemd[1]: Starting systemd-sysusers.service - Create System Users...
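Among the entries above, journald reports its flush cost in a fixed phrasing ("... is 22.253ms for 976 entries."). When profiling boot logs like this one it can be handy to pull those numbers out; the one-off parser below is our own sketch (the name `flush_stats` and the regex are assumptions, not a journald interface).

```python
import re

def flush_stats(line):
    """Extract (milliseconds, entry_count) from a journald flush report line."""
    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    return (float(m.group(1)), int(m.group(2))) if m else None

# The flush report from the log above.
line = ("systemd-journald[1187]: Time spent on flushing to "
        "/var/log/journal/e3eaa6b9784d43568276c23ad076e514 "
        "is 22.253ms for 976 entries.")
print(flush_stats(line))  # (22.253, 976)
```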
Jul 7 00:09:48.990717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:09:48.995602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:48.998637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:09:49.009285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:49.011740 kernel: loop1: detected capacity change from 0 to 224512
Jul 7 00:09:49.019718 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 00:09:49.031664 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:09:49.034435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:49.039912 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 00:09:49.069123 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 00:09:49.076084 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 7 00:09:49.076102 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 7 00:09:49.080727 kernel: loop4: detected capacity change from 0 to 224512
Jul 7 00:09:49.087929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:49.095689 kernel: loop5: detected capacity change from 0 to 113872
Jul 7 00:09:49.100788 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 00:09:49.101344 (sd-merge)[1272]: Merged extensions into '/usr'.
Jul 7 00:09:49.105747 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:09:49.105764 systemd[1]: Reloading...
Jul 7 00:09:49.259710 zram_generator::config[1299]: No configuration found.
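The `(sd-merge)` lines above show systemd-sysext merging three extension images into `/usr` (each image also accounts for one of the loop-device capacity changes logged by the kernel). To list the merged extensions from such a log line, a small extractor like the following works; it is an illustrative sketch of ours, not part of systemd.

```python
import re

def merged_extensions(line):
    """Return the single-quoted extension names from an sd-merge log line."""
    return re.findall(r"'([^']+)'", line)

# The sd-merge report from the log above.
line = ("(sd-merge)[1272]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes'.")
print(merged_extensions(line))  # ['containerd-flatcar', 'docker-flatcar', 'kubernetes']
```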
Jul 7 00:09:49.380037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:49.442046 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:09:49.462500 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:09:49.463136 systemd[1]: Reloading finished in 356 ms.
Jul 7 00:09:49.483569 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:09:49.502486 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:09:49.583739 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:09:49.586001 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:49.606096 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:09:49.606117 systemd[1]: Reloading...
Jul 7 00:09:49.624624 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 00:09:49.624661 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 00:09:49.625363 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 00:09:49.625639 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 00:09:49.626570 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 00:09:49.626865 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Jul 7 00:09:49.626938 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Jul 7 00:09:49.631369 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:49.631388 systemd-tmpfiles[1338]: Skipping /boot
Jul 7 00:09:49.650950 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:49.651092 systemd-tmpfiles[1338]: Skipping /boot
Jul 7 00:09:49.676701 zram_generator::config[1365]: No configuration found.
Jul 7 00:09:49.767933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:49.849420 systemd[1]: Reloading finished in 242 ms.
Jul 7 00:09:49.873331 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:09:49.901139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:49.909524 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 00:09:49.912057 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 00:09:49.914531 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 00:09:49.925263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:49.929658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:49.932991 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 00:09:49.939902 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:49.940137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:49.945491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:49.948700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:49.955759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:49.957352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:49.957554 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:09:49.964264 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:09:49.966735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:49.968541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 00:09:49.971360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:49.971591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:49.973696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:49.973957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:49.984595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:49.984883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:49.993276 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 00:09:50.009520 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 00:09:50.012928 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
Jul 7 00:09:50.014359 augenrules[1439]: No rules
Jul 7 00:09:50.014991 systemd[1]: Finished ensure-sysext.service.
Jul 7 00:09:50.016490 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:09:50.016781 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 00:09:50.021120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:50.021284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:50.022936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:50.025356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:50.028802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:50.032806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:50.034083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:50.034122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:09:50.044406 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 00:09:50.047833 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 00:09:50.049071 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:09:50.049110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:50.049805 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 00:09:50.051254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:50.055136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:50.055371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:50.057085 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:50.058380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:50.060247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:50.066933 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:50.071121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:50.071341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:50.103934 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 00:09:50.110750 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:50.111984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:50.112049 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:50.143541 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 00:09:50.188757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:09:50.196079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 00:09:50.401703 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 00:09:50.404710 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 00:09:50.410703 kernel: ACPI: button: Power Button [PWRF]
Jul 7 00:09:50.416879 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 00:09:50.417173 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 00:09:50.414898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 00:09:50.499915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:50.552643 systemd-resolved[1407]: Positive Trust Anchors:
Jul 7 00:09:50.552663 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:50.552775 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:50.558759 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Jul 7 00:09:50.561746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:50.562374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:50.569054 kernel: kvm_amd: TSC scaling supported
Jul 7 00:09:50.569093 kernel: kvm_amd: Nested Virtualization enabled
Jul 7 00:09:50.569106 kernel: kvm_amd: Nested Paging enabled
Jul 7 00:09:50.569119 kernel: kvm_amd: LBR virtualization supported
Jul 7 00:09:50.570196 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 7 00:09:50.570218 kernel: kvm_amd: Virtual GIF supported
Jul 7 00:09:50.635705 kernel: EDAC MC: Ver: 3.0.0
Jul 7 00:09:50.647433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:50.658922 systemd-networkd[1489]: lo: Link UP
Jul 7 00:09:50.659286 systemd-networkd[1489]: lo: Gained carrier
Jul 7 00:09:50.660685 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 00:09:50.661096 systemd-networkd[1489]: Enumeration completed
Jul 7 00:09:50.661522 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:50.661526 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:50.662108 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:50.662840 systemd-networkd[1489]: eth0: Link UP
Jul 7 00:09:50.663069 systemd-networkd[1489]: eth0: Gained carrier
Jul 7 00:09:50.663096 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:50.663309 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:50.664334 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:50.665524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 00:09:50.666972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 00:09:50.668243 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 00:09:50.669405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 00:09:50.670692 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 00:09:50.670717 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:50.671967 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 00:09:50.673176 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 00:09:50.674359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 00:09:50.675593 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:50.677927 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 00:09:50.679738 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:09:50.680678 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 00:09:50.681169 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection.
Jul 7 00:09:50.681875 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 00:09:50.681918 systemd-timesyncd[1450]: Initial clock synchronization to Mon 2025-07-07 00:09:50.993390 UTC.
Jul 7 00:09:50.685145 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 00:09:50.686620 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 00:09:50.687862 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 00:09:50.691907 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 00:09:50.693397 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 00:09:50.695902 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 00:09:50.698222 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 00:09:50.700001 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 00:09:50.700990 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:50.702058 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:50.703066 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:50.703089 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:50.704039 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 00:09:50.706086 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 00:09:50.710825 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 00:09:50.713345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 00:09:50.715809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 00:09:50.716846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 00:09:50.718101 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 00:09:50.720222 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 00:09:50.722508 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 00:09:50.724726 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 00:09:50.727180 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 00:09:50.734315 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 00:09:50.736714 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 00:09:50.738622 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 00:09:50.739420 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 00:09:50.743881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 00:09:50.748341 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 00:09:50.792285 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing passwd entry cache
Jul 7 00:09:50.791461 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 00:09:50.790864 oslogin_cache_refresh[1536]: Refreshing passwd entry cache
Jul 7 00:09:50.791949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 00:09:50.794411 jq[1534]: false
Jul 7 00:09:50.795532 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 00:09:50.795559 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 00:09:50.800439 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 00:09:50.804245 jq[1543]: true
Jul 7 00:09:50.810419 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting users, quitting
Jul 7 00:09:50.810419 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 00:09:50.810419 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing group entry cache
Jul 7 00:09:50.809897 oslogin_cache_refresh[1536]: Failure getting users, quitting
Jul 7 00:09:50.809923 oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 00:09:50.809975 oslogin_cache_refresh[1536]: Refreshing group entry cache
Jul 7 00:09:50.816027 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting groups, quitting
Jul 7 00:09:50.816027 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 00:09:50.815143 oslogin_cache_refresh[1536]: Failure getting groups, quitting
Jul 7 00:09:50.815153 oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 00:09:50.822701 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 00:09:50.824913 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 00:09:50.826618 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 00:09:50.829279 extend-filesystems[1535]: Found /dev/vda6
Jul 7 00:09:50.833411 jq[1554]: true
Jul 7 00:09:50.836512 update_engine[1542]: I20250707 00:09:50.836416 1542 main.cc:92] Flatcar Update Engine starting
Jul 7 00:09:50.839434 extend-filesystems[1535]: Found /dev/vda9
Jul 7 00:09:50.846740 extend-filesystems[1535]: Checking size of /dev/vda9
Jul 7 00:09:50.847977 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 00:09:50.848464 tar[1549]: linux-amd64/LICENSE
Jul 7 00:09:50.848814 tar[1549]: linux-amd64/helm
Jul 7 00:09:50.850258 dbus-daemon[1532]: [system] SELinux support is enabled
Jul 7 00:09:50.854546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 00:09:50.855927 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 00:09:50.865924 update_engine[1542]: I20250707 00:09:50.865762 1542 update_check_scheduler.cc:74] Next update check in 3m37s
Jul 7 00:09:50.866221 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 00:09:50.866256 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 00:09:50.867659 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 00:09:50.867701 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 00:09:50.869111 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 00:09:50.873134 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 00:09:50.879162 extend-filesystems[1535]: Resized partition /dev/vda9
Jul 7 00:09:50.888895 extend-filesystems[1586]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 00:09:50.898694 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 00:09:50.925692 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 00:09:50.928758 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 00:09:50.928802 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 00:09:50.933313 systemd-logind[1541]: New seat seat0.
Jul 7 00:09:50.957049 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 00:09:50.957049 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 00:09:50.957049 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 00:09:50.998329 extend-filesystems[1535]: Resized filesystem in /dev/vda9
Jul 7 00:09:51.000177 bash[1593]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:09:50.970807 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 00:09:50.971138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 00:09:50.996582 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 00:09:50.998858 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 00:09:51.000760 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 00:09:51.011950 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 00:09:51.167093 containerd[1548]: time="2025-07-07T00:09:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 00:09:51.167743 containerd[1548]: time="2025-07-07T00:09:51.167692795Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 00:09:51.178735 containerd[1548]: time="2025-07-07T00:09:51.178686784Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.076µs"
Jul 7 00:09:51.178735 containerd[1548]: time="2025-07-07T00:09:51.178724787Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 00:09:51.178845 containerd[1548]: time="2025-07-07T00:09:51.178741795Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 00:09:51.178960 containerd[1548]: time="2025-07-07T00:09:51.178939421Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 00:09:51.178960 containerd[1548]: time="2025-07-07T00:09:51.178956450Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 00:09:51.179004 containerd[1548]: time="2025-07-07T00:09:51.178978600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179075 containerd[1548]: time="2025-07-07T00:09:51.179054918Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179075 containerd[1548]: time="2025-07-07T00:09:51.179068690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179364 containerd[1548]: time="2025-07-07T00:09:51.179334046Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179364 containerd[1548]: time="2025-07-07T00:09:51.179350149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179364 containerd[1548]: time="2025-07-07T00:09:51.179360715Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179436 containerd[1548]: time="2025-07-07T00:09:51.179368792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179495 containerd[1548]: time="2025-07-07T00:09:51.179468396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179882 containerd[1548]: time="2025-07-07T00:09:51.179843265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179882 containerd[1548]: time="2025-07-07T00:09:51.179876678Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 00:09:51.179943 containerd[1548]: time="2025-07-07T00:09:51.179886192Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 00:09:51.179943 containerd[1548]: time="2025-07-07T00:09:51.179914036Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 00:09:51.180258 containerd[1548]: time="2025-07-07T00:09:51.180229210Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 00:09:51.180325 containerd[1548]: time="2025-07-07T00:09:51.180307861Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 00:09:51.188698 containerd[1548]: time="2025-07-07T00:09:51.188649541Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 00:09:51.188741 containerd[1548]: time="2025-07-07T00:09:51.188707384Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 00:09:51.188741 containerd[1548]: time="2025-07-07T00:09:51.188720989Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 00:09:51.188741 containerd[1548]: time="2025-07-07T00:09:51.188731731Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 00:09:51.188795 containerd[1548]: time="2025-07-07T00:09:51.188742723Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 00:09:51.188795 containerd[1548]: time="2025-07-07T00:09:51.188751883Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 00:09:51.188795 containerd[1548]: time="2025-07-07T00:09:51.188763374Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 00:09:51.188795 containerd[1548]: time="2025-07-07T00:09:51.188773440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 00:09:51.188795 containerd[1548]: time="2025-07-07T00:09:51.188794070Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 00:09:51.188904 containerd[1548]: time="2025-07-07T00:09:51.188804042Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 00:09:51.188904 containerd[1548]: time="2025-07-07T00:09:51.188812380Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 00:09:51.188904 containerd[1548]: time="2025-07-07T00:09:51.188823694Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 00:09:51.188973 containerd[1548]: time="2025-07-07T00:09:51.188951017Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 00:09:51.188999 containerd[1548]: time="2025-07-07T00:09:51.188973418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 00:09:51.188999 containerd[1548]: time="2025-07-07T00:09:51.188986074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 00:09:51.188999 containerd[1548]: time="2025-07-07T00:09:51.188995599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 00:09:51.189051 containerd[1548]: time="2025-07-07T00:09:51.189004957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 00:09:51.189051 containerd[1548]: time="2025-07-07T00:09:51.189015043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 00:09:51.189051 containerd[1548]: time="2025-07-07T00:09:51.189025920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 00:09:51.189051 containerd[1548]: time="2025-07-07T00:09:51.189035871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 00:09:51.189160 containerd[1548]: time="2025-07-07T00:09:51.189058834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 00:09:51.189160 containerd[1548]: time="2025-07-07T00:09:51.189069181Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 00:09:51.189160 containerd[1548]: time="2025-07-07T00:09:51.189094932Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 00:09:51.189232 containerd[1548]: time="2025-07-07T00:09:51.189171054Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 00:09:51.189232 containerd[1548]: time="2025-07-07T00:09:51.189186146Z" level=info msg="Start snapshots syncer"
Jul 7 00:09:51.189232 containerd[1548]: time="2025-07-07T00:09:51.189209733Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 00:09:51.189541 containerd[1548]: time="2025-07-07T00:09:51.189500176Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 00:09:51.189757 containerd[1548]: time="2025-07-07T00:09:51.189563962Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 00:09:51.189757 containerd[1548]: time="2025-07-07T00:09:51.189635076Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 00:09:51.189910 containerd[1548]: time="2025-07-07T00:09:51.189890763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 00:09:51.189995 containerd[1548]: time="2025-07-07T00:09:51.189969423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 00:09:51.189995 containerd[1548]: time="2025-07-07T00:09:51.189987380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 00:09:51.190046 containerd[1548]: time="2025-07-07T00:09:51.189997194Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 00:09:51.190046 containerd[1548]: time="2025-07-07T00:09:51.190008062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 00:09:51.190046 containerd[1548]: time="2025-07-07T00:09:51.190018034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 00:09:51.190046 containerd[1548]: time="2025-07-07T00:09:51.190028068Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 00:09:51.190123 containerd[1548]: time="2025-07-07T00:09:51.190053362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 00:09:51.190123 containerd[1548]: time="2025-07-07T00:09:51.190063552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 00:09:51.190123 containerd[1548]: time="2025-07-07T00:09:51.190073326Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 00:09:51.190123 containerd[1548]: time="2025-07-07T00:09:51.190108498Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 00:09:51.190123 containerd[1548]: time="2025-07-07T00:09:51.190120250Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190128910Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190137467Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190144691Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190167101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190184994Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190202127Z" level=info msg="runtime interface created"
Jul 7 00:09:51.190215 containerd[1548]: time="2025-07-07T00:09:51.190207509Z" level=info msg="created NRI interface"
Jul 7 00:09:51.190353 containerd[1548]: time="2025-07-07T00:09:51.190228816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 00:09:51.190353 containerd[1548]: time="2025-07-07T00:09:51.190239350Z" level=info msg="Connect containerd service"
Jul 7 00:09:51.190353 containerd[1548]: time="2025-07-07T00:09:51.190261354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 00:09:51.191392 containerd[1548]:
time="2025-07-07T00:09:51.191362325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:09:51.328933 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:09:51.387197 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:09:51.390743 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:09:51.419128 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:09:51.419437 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:09:51.427882 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:09:51.462106 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:09:51.466410 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:09:51.470775 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:09:51.472207 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:09:51.489505 containerd[1548]: time="2025-07-07T00:09:51.489424265Z" level=info msg="Start subscribing containerd event" Jul 7 00:09:51.489655 containerd[1548]: time="2025-07-07T00:09:51.489532123Z" level=info msg="Start recovering state" Jul 7 00:09:51.489768 containerd[1548]: time="2025-07-07T00:09:51.489759592Z" level=info msg="Start event monitor" Jul 7 00:09:51.489794 containerd[1548]: time="2025-07-07T00:09:51.489770854Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 7 00:09:51.489794 containerd[1548]: time="2025-07-07T00:09:51.489787571Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:09:51.489866 containerd[1548]: time="2025-07-07T00:09:51.489798011Z" level=info msg="Start streaming server" Jul 7 00:09:51.489866 containerd[1548]: time="2025-07-07T00:09:51.489813031Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:09:51.489866 containerd[1548]: time="2025-07-07T00:09:51.489822368Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:09:51.489866 containerd[1548]: time="2025-07-07T00:09:51.489825959Z" level=info msg="runtime interface starting up..." Jul 7 00:09:51.489866 containerd[1548]: time="2025-07-07T00:09:51.489856885Z" level=info msg="starting plugins..." Jul 7 00:09:51.490047 containerd[1548]: time="2025-07-07T00:09:51.489883354Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:09:51.490153 containerd[1548]: time="2025-07-07T00:09:51.490067292Z" level=info msg="containerd successfully booted in 0.323550s" Jul 7 00:09:51.490396 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:09:51.623501 tar[1549]: linux-amd64/README.md Jul 7 00:09:51.659879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:09:52.684023 systemd-networkd[1489]: eth0: Gained IPv6LL Jul 7 00:09:52.688006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:09:52.689872 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:09:52.692546 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 00:09:52.695029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:09:52.697585 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:09:52.723828 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
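[Editor's note] The containerd startup above reports a CNI error: "no network config found in /etc/cni/net.d: cni plugin not initialized". On a Kubernetes node this is expected at first boot; the directory is usually populated later by the cluster's CNI add-on. For reference, a file such as `/etc/cni/net.d/10-mynet.conflist` with the following shape would satisfy the loader. This is a generic sketch of the CNI conflist format, not a config taken from this host; the network name and subnet are made up.

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```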
Jul 7 00:09:52.736436 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 00:09:52.736763 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 00:09:52.738883 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:09:53.050249 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:09:53.052826 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454). Jul 7 00:09:53.158485 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:09:53.160827 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:09:53.169798 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:09:53.172531 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:09:53.181948 systemd-logind[1541]: New session 1 of user core. Jul 7 00:09:53.207295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:09:53.212947 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:09:53.238313 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:09:53.242165 systemd-logind[1541]: New session c1 of user core. Jul 7 00:09:53.413939 systemd[1666]: Queued start job for default target default.target. Jul 7 00:09:53.424037 systemd[1666]: Created slice app.slice - User Application Slice. Jul 7 00:09:53.424067 systemd[1666]: Reached target paths.target - Paths. Jul 7 00:09:53.424120 systemd[1666]: Reached target timers.target - Timers. Jul 7 00:09:53.425752 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:09:53.437984 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jul 7 00:09:53.438127 systemd[1666]: Reached target sockets.target - Sockets. Jul 7 00:09:53.438180 systemd[1666]: Reached target basic.target - Basic System. Jul 7 00:09:53.438241 systemd[1666]: Reached target default.target - Main User Target. Jul 7 00:09:53.438281 systemd[1666]: Startup finished in 186ms. Jul 7 00:09:53.438694 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:09:53.446834 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:09:53.518094 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460). Jul 7 00:09:53.578466 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:09:53.580324 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:09:53.584942 systemd-logind[1541]: New session 2 of user core. Jul 7 00:09:53.598857 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:09:53.658173 sshd[1679]: Connection closed by 10.0.0.1 port 42460 Jul 7 00:09:53.658598 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Jul 7 00:09:53.673557 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:42460.service: Deactivated successfully. Jul 7 00:09:53.675891 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:09:53.676592 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:09:53.680088 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:42464.service - OpenSSH per-connection server daemon (10.0.0.1:42464). Jul 7 00:09:53.720934 systemd-logind[1541]: Removed session 2. 
Jul 7 00:09:53.779893 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 42464 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:09:53.783919 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:09:53.792297 systemd-logind[1541]: New session 3 of user core. Jul 7 00:09:53.794498 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:09:53.854621 sshd[1687]: Connection closed by 10.0.0.1 port 42464 Jul 7 00:09:53.854947 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Jul 7 00:09:53.859569 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:42464.service: Deactivated successfully. Jul 7 00:09:53.861751 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:09:53.862972 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:09:53.864199 systemd-logind[1541]: Removed session 3. Jul 7 00:09:54.120846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:09:54.122489 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:09:54.123862 systemd[1]: Startup finished in 3.393s (kernel) + 8.218s (initrd) + 6.272s (userspace) = 17.884s. Jul 7 00:09:54.147027 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:09:55.050291 kubelet[1697]: E0707 00:09:55.050210 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:09:55.054620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:09:55.054906 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 00:09:55.056153 systemd[1]: kubelet.service: Consumed 2.040s CPU time, 266.1M memory peak. Jul 7 00:10:04.026751 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:49536.service - OpenSSH per-connection server daemon (10.0.0.1:49536). Jul 7 00:10:04.074281 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 49536 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.075797 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.080352 systemd-logind[1541]: New session 4 of user core. Jul 7 00:10:04.093810 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:10:04.146263 sshd[1712]: Connection closed by 10.0.0.1 port 49536 Jul 7 00:10:04.146581 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:04.161019 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:49536.service: Deactivated successfully. Jul 7 00:10:04.162757 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:10:04.163490 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:10:04.166266 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:49552.service - OpenSSH per-connection server daemon (10.0.0.1:49552). Jul 7 00:10:04.166906 systemd-logind[1541]: Removed session 4. Jul 7 00:10:04.222831 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 49552 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.224292 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.228774 systemd-logind[1541]: New session 5 of user core. Jul 7 00:10:04.238838 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 00:10:04.287940 sshd[1720]: Connection closed by 10.0.0.1 port 49552 Jul 7 00:10:04.288492 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:04.297057 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:49552.service: Deactivated successfully. Jul 7 00:10:04.298829 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:10:04.299581 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:10:04.302422 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:49560.service - OpenSSH per-connection server daemon (10.0.0.1:49560). Jul 7 00:10:04.303026 systemd-logind[1541]: Removed session 5. Jul 7 00:10:04.358395 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 49560 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.359836 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.364238 systemd-logind[1541]: New session 6 of user core. Jul 7 00:10:04.374803 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:10:04.427547 sshd[1728]: Connection closed by 10.0.0.1 port 49560 Jul 7 00:10:04.427863 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:04.441233 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:49560.service: Deactivated successfully. Jul 7 00:10:04.442921 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:10:04.443739 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:10:04.446649 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:49562.service - OpenSSH per-connection server daemon (10.0.0.1:49562). Jul 7 00:10:04.447322 systemd-logind[1541]: Removed session 6. 
Jul 7 00:10:04.497484 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 49562 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.499025 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.503307 systemd-logind[1541]: New session 7 of user core. Jul 7 00:10:04.517792 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:10:04.575313 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:10:04.575649 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:04.595979 sudo[1737]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:04.597792 sshd[1736]: Connection closed by 10.0.0.1 port 49562 Jul 7 00:10:04.598190 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:04.610600 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:49562.service: Deactivated successfully. Jul 7 00:10:04.612537 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:10:04.613390 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:10:04.616323 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:49564.service - OpenSSH per-connection server daemon (10.0.0.1:49564). Jul 7 00:10:04.617128 systemd-logind[1541]: Removed session 7. Jul 7 00:10:04.675235 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 49564 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.676563 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.681116 systemd-logind[1541]: New session 8 of user core. Jul 7 00:10:04.690817 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 00:10:04.744442 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:10:04.744780 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:04.750725 sudo[1747]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:04.757177 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:10:04.757485 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:04.767855 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:10:04.818609 augenrules[1769]: No rules Jul 7 00:10:04.820480 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:10:04.820828 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:10:04.822081 sudo[1746]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:04.823539 sshd[1745]: Connection closed by 10.0.0.1 port 49564 Jul 7 00:10:04.823869 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:04.836893 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:49564.service: Deactivated successfully. Jul 7 00:10:04.838916 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:10:04.839706 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:10:04.843212 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:49566.service - OpenSSH per-connection server daemon (10.0.0.1:49566). Jul 7 00:10:04.844022 systemd-logind[1541]: Removed session 8. Jul 7 00:10:04.891492 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:04.892871 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:04.897215 systemd-logind[1541]: New session 9 of user core. 
Jul 7 00:10:04.912840 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:10:04.966435 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:10:04.966795 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:05.209999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:10:05.211853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:05.765342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:05.784064 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:10:05.856759 kubelet[1808]: E0707 00:10:05.856698 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:10:05.863255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:10:05.863451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:10:05.863835 systemd[1]: kubelet.service: Consumed 369ms CPU time, 110.8M memory peak. Jul 7 00:10:05.872138 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 00:10:05.894983 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:10:06.427273 dockerd[1817]: time="2025-07-07T00:10:06.427165710Z" level=info msg="Starting up" Jul 7 00:10:06.429198 dockerd[1817]: time="2025-07-07T00:10:06.429173281Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:10:06.763079 dockerd[1817]: time="2025-07-07T00:10:06.763019926Z" level=info msg="Loading containers: start." Jul 7 00:10:06.773715 kernel: Initializing XFRM netlink socket Jul 7 00:10:07.020882 systemd-networkd[1489]: docker0: Link UP Jul 7 00:10:07.027057 dockerd[1817]: time="2025-07-07T00:10:07.027012233Z" level=info msg="Loading containers: done." Jul 7 00:10:07.044969 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3999029289-merged.mount: Deactivated successfully. Jul 7 00:10:07.047393 dockerd[1817]: time="2025-07-07T00:10:07.047354556Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:10:07.047465 dockerd[1817]: time="2025-07-07T00:10:07.047439735Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:10:07.047593 dockerd[1817]: time="2025-07-07T00:10:07.047567720Z" level=info msg="Initializing buildkit" Jul 7 00:10:07.078293 dockerd[1817]: time="2025-07-07T00:10:07.078226792Z" level=info msg="Completed buildkit initialization" Jul 7 00:10:07.082578 dockerd[1817]: time="2025-07-07T00:10:07.082532391Z" level=info msg="Daemon has completed initialization" Jul 7 00:10:07.082650 dockerd[1817]: time="2025-07-07T00:10:07.082599534Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:10:07.082811 systemd[1]: Started docker.service - Docker 
Application Container Engine. Jul 7 00:10:07.908763 containerd[1548]: time="2025-07-07T00:10:07.908697102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:10:08.567839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637235929.mount: Deactivated successfully. Jul 7 00:10:09.710878 containerd[1548]: time="2025-07-07T00:10:09.710813865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:09.711532 containerd[1548]: time="2025-07-07T00:10:09.711465118Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 00:10:09.713155 containerd[1548]: time="2025-07-07T00:10:09.713099738Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:09.716733 containerd[1548]: time="2025-07-07T00:10:09.716643264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:09.719325 containerd[1548]: time="2025-07-07T00:10:09.719278737Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.810523698s" Jul 7 00:10:09.719438 containerd[1548]: time="2025-07-07T00:10:09.719414126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:10:09.721065 containerd[1548]: 
time="2025-07-07T00:10:09.721015739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:10:11.128691 containerd[1548]: time="2025-07-07T00:10:11.128605758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:11.129448 containerd[1548]: time="2025-07-07T00:10:11.129389702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 00:10:11.130809 containerd[1548]: time="2025-07-07T00:10:11.130761628Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:11.134060 containerd[1548]: time="2025-07-07T00:10:11.134025981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:11.135221 containerd[1548]: time="2025-07-07T00:10:11.135160594Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.414092111s" Jul 7 00:10:11.135274 containerd[1548]: time="2025-07-07T00:10:11.135214449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:10:11.135835 containerd[1548]: time="2025-07-07T00:10:11.135799929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:10:12.916347 
containerd[1548]: time="2025-07-07T00:10:12.916265521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:12.917039 containerd[1548]: time="2025-07-07T00:10:12.916978462Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 00:10:12.918280 containerd[1548]: time="2025-07-07T00:10:12.918224858Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:12.920835 containerd[1548]: time="2025-07-07T00:10:12.920799686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:12.921611 containerd[1548]: time="2025-07-07T00:10:12.921576909Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.785523869s" Jul 7 00:10:12.921611 containerd[1548]: time="2025-07-07T00:10:12.921609567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:10:12.922226 containerd[1548]: time="2025-07-07T00:10:12.922188282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:10:14.166053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044363460.mount: Deactivated successfully. 
Jul 7 00:10:15.090487 containerd[1548]: time="2025-07-07T00:10:15.090419150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.091434 containerd[1548]: time="2025-07-07T00:10:15.091382765Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 00:10:15.092578 containerd[1548]: time="2025-07-07T00:10:15.092520251Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.094263 containerd[1548]: time="2025-07-07T00:10:15.094224802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.094782 containerd[1548]: time="2025-07-07T00:10:15.094740843Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.172520089s" Jul 7 00:10:15.094782 containerd[1548]: time="2025-07-07T00:10:15.094775352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:10:15.095215 containerd[1548]: time="2025-07-07T00:10:15.095188839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:10:15.836322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013891678.mount: Deactivated successfully. Jul 7 00:10:15.959836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
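[Editor's note] Each "Pulled image ... size X in Ys" entry above implicitly reports a pull throughput. A small sketch computing it, using the sizes and durations verbatim from this log (the helper name is ours, not containerd's):

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    """Effective throughput of one image pull, in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

# (size in bytes, pull duration in seconds), copied from the log entries above
pulls = {
    "kube-apiserver:v1.32.6": (28795845, 1.810523698),
    "kube-controller-manager:v1.32.6": (26385746, 1.414092111),
    "kube-scheduler:v1.32.6": (20778768, 1.785523869),
    "kube-proxy:v1.32.6": (30894382, 2.172520089),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {pull_rate_mib_s(size, secs):.1f} MiB/s")
```

The rates land in the low-to-mid teens of MiB/s, which is plausible for a registry pull from a QEMU guest.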
Jul 7 00:10:15.962079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:16.173327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:16.177595 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:10:16.455900 kubelet[2118]: E0707 00:10:16.455723 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:10:16.459914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:10:16.460107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:10:16.460492 systemd[1]: kubelet.service: Consumed 300ms CPU time, 110.8M memory peak. 
Jul 7 00:10:16.877339 containerd[1548]: time="2025-07-07T00:10:16.877278935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.878038 containerd[1548]: time="2025-07-07T00:10:16.878014338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:10:16.879355 containerd[1548]: time="2025-07-07T00:10:16.879317519Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.881936 containerd[1548]: time="2025-07-07T00:10:16.881870226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.882752 containerd[1548]: time="2025-07-07T00:10:16.882709306Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.787489552s" Jul 7 00:10:16.882752 containerd[1548]: time="2025-07-07T00:10:16.882748032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:10:16.883416 containerd[1548]: time="2025-07-07T00:10:16.883251942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:10:17.392570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144971266.mount: Deactivated successfully. 
Jul 7 00:10:17.398953 containerd[1548]: time="2025-07-07T00:10:17.398917036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:10:17.399819 containerd[1548]: time="2025-07-07T00:10:17.399780607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 7 00:10:17.401047 containerd[1548]: time="2025-07-07T00:10:17.401012053Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:10:17.402902 containerd[1548]: time="2025-07-07T00:10:17.402871183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:10:17.403474 containerd[1548]: time="2025-07-07T00:10:17.403435151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 520.156031ms"
Jul 7 00:10:17.403507 containerd[1548]: time="2025-07-07T00:10:17.403473018Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 00:10:17.404014 containerd[1548]: time="2025-07-07T00:10:17.403994776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 7 00:10:17.989210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36789111.mount: Deactivated successfully.
Jul 7 00:10:20.581627 containerd[1548]: time="2025-07-07T00:10:20.581564668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:20.582720 containerd[1548]: time="2025-07-07T00:10:20.582687853Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Jul 7 00:10:20.584013 containerd[1548]: time="2025-07-07T00:10:20.583938911Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:20.586635 containerd[1548]: time="2025-07-07T00:10:20.586589636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:20.587821 containerd[1548]: time="2025-07-07T00:10:20.587776804Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.183752119s"
Jul 7 00:10:20.587872 containerd[1548]: time="2025-07-07T00:10:20.587822637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 7 00:10:23.012587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:23.012786 systemd[1]: kubelet.service: Consumed 300ms CPU time, 110.8M memory peak.
Jul 7 00:10:23.015491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:23.049315 systemd[1]: Reload requested from client PID 2252 ('systemctl') (unit session-9.scope)...
Jul 7 00:10:23.049351 systemd[1]: Reloading...
Jul 7 00:10:23.142722 zram_generator::config[2296]: No configuration found.
Jul 7 00:10:23.344537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:10:23.461499 systemd[1]: Reloading finished in 411 ms.
Jul 7 00:10:23.528356 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 00:10:23.528457 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 00:10:23.528772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:23.528822 systemd[1]: kubelet.service: Consumed 193ms CPU time, 98.3M memory peak.
Jul 7 00:10:23.530376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:23.690647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:23.700015 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:10:23.742426 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:10:23.742426 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:10:23.742426 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:10:23.742919 kubelet[2344]: I0707 00:10:23.742492 2344 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:10:24.211940 kubelet[2344]: I0707 00:10:24.211886 2344 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 00:10:24.211940 kubelet[2344]: I0707 00:10:24.211921 2344 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:10:24.212249 kubelet[2344]: I0707 00:10:24.212222 2344 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 00:10:24.237780 kubelet[2344]: I0707 00:10:24.237733 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:10:24.238162 kubelet[2344]: E0707 00:10:24.238132 2344 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:24.245628 kubelet[2344]: I0707 00:10:24.245602 2344 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 00:10:24.250831 kubelet[2344]: I0707 00:10:24.250809 2344 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:10:24.252320 kubelet[2344]: I0707 00:10:24.252276 2344 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:10:24.252529 kubelet[2344]: I0707 00:10:24.252312 2344 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:10:24.252633 kubelet[2344]: I0707 00:10:24.252544 2344 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:10:24.252633 kubelet[2344]: I0707 00:10:24.252553 2344 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 00:10:24.252774 kubelet[2344]: I0707 00:10:24.252760 2344 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:10:24.256571 kubelet[2344]: I0707 00:10:24.256547 2344 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 00:10:24.256608 kubelet[2344]: I0707 00:10:24.256585 2344 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:10:24.256639 kubelet[2344]: I0707 00:10:24.256616 2344 kubelet.go:352] "Adding apiserver pod source"
Jul 7 00:10:24.256639 kubelet[2344]: I0707 00:10:24.256632 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:10:24.259259 kubelet[2344]: W0707 00:10:24.259194 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 7 00:10:24.259259 kubelet[2344]: W0707 00:10:24.259197 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 7 00:10:24.259333 kubelet[2344]: E0707 00:10:24.259269 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:24.259333 kubelet[2344]: E0707 00:10:24.259285 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:24.260051 kubelet[2344]: I0707 00:10:24.260000 2344 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 00:10:24.260488 kubelet[2344]: I0707 00:10:24.260466 2344 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:10:24.261385 kubelet[2344]: W0707 00:10:24.261356 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 00:10:24.263583 kubelet[2344]: I0707 00:10:24.263541 2344 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 00:10:24.263583 kubelet[2344]: I0707 00:10:24.263593 2344 server.go:1287] "Started kubelet"
Jul 7 00:10:24.263898 kubelet[2344]: I0707 00:10:24.263838 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:10:24.264367 kubelet[2344]: I0707 00:10:24.264350 2344 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:10:24.264505 kubelet[2344]: I0707 00:10:24.264484 2344 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:10:24.265882 kubelet[2344]: I0707 00:10:24.265659 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:10:24.265882 kubelet[2344]: I0707 00:10:24.265808 2344 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 00:10:24.267106 kubelet[2344]: I0707 00:10:24.266514 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:10:24.267106 kubelet[2344]: E0707 00:10:24.266835 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:10:24.267106 kubelet[2344]: I0707 00:10:24.266862 2344 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 00:10:24.267106 kubelet[2344]: I0707 00:10:24.267030 2344 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 00:10:24.267106 kubelet[2344]: I0707 00:10:24.267091 2344 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:10:24.267518 kubelet[2344]: W0707 00:10:24.267325 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 7 00:10:24.267518 kubelet[2344]: E0707 00:10:24.267374 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:24.267619 kubelet[2344]: E0707 00:10:24.267585 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms"
Jul 7 00:10:24.268301 kubelet[2344]: I0707 00:10:24.267964 2344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:10:24.268469 kubelet[2344]: E0707 00:10:24.268440 2344 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:10:24.269135 kubelet[2344]: I0707 00:10:24.269104 2344 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:10:24.269135 kubelet[2344]: I0707 00:10:24.269129 2344 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:10:24.270717 kubelet[2344]: E0707 00:10:24.268730 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf963e0e054b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:10:24.263562571 +0000 UTC m=+0.559530090,LastTimestamp:2025-07-07 00:10:24.263562571 +0000 UTC m=+0.559530090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 7 00:10:24.290755 kubelet[2344]: I0707 00:10:24.290724 2344 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 00:10:24.290755 kubelet[2344]: I0707 00:10:24.290742 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 00:10:24.290755 kubelet[2344]: I0707 00:10:24.290766 2344 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:10:24.367810 kubelet[2344]: E0707 00:10:24.367784 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:10:24.468138 kubelet[2344]: E0707 00:10:24.468030 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:10:24.468539 kubelet[2344]: E0707 00:10:24.468499 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms"
Jul 7 00:10:24.568878 kubelet[2344]: E0707 00:10:24.568823 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:10:24.642072 kubelet[2344]: I0707 00:10:24.642026 2344 policy_none.go:49] "None policy: Start"
Jul 7 00:10:24.642072 kubelet[2344]: I0707 00:10:24.642082 2344 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 00:10:24.642278 kubelet[2344]: I0707 00:10:24.642124 2344 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:10:24.647745 kubelet[2344]: I0707 00:10:24.647701 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:10:24.649696 kubelet[2344]: I0707 00:10:24.649399 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:10:24.649696 kubelet[2344]: I0707 00:10:24.649465 2344 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 00:10:24.649696 kubelet[2344]: I0707 00:10:24.649522 2344 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 00:10:24.649696 kubelet[2344]: I0707 00:10:24.649538 2344 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 00:10:24.649696 kubelet[2344]: E0707 00:10:24.649639 2344 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:10:24.650448 kubelet[2344]: W0707 00:10:24.650418 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 7 00:10:24.650498 kubelet[2344]: E0707 00:10:24.650455 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:24.651981 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 00:10:24.665236 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 00:10:24.669015 kubelet[2344]: E0707 00:10:24.668992 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:10:24.669893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 00:10:24.685065 kubelet[2344]: I0707 00:10:24.685020 2344 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 00:10:24.685366 kubelet[2344]: I0707 00:10:24.685349 2344 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 00:10:24.685412 kubelet[2344]: I0707 00:10:24.685370 2344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 00:10:24.686227 kubelet[2344]: I0707 00:10:24.686154 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 00:10:24.686854 kubelet[2344]: E0707 00:10:24.686839 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 00:10:24.686945 kubelet[2344]: E0707 00:10:24.686934 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 7 00:10:24.758851 systemd[1]: Created slice kubepods-burstable-pod81e481258d734b5b61be5baf8f41834e.slice - libcontainer container kubepods-burstable-pod81e481258d734b5b61be5baf8f41834e.slice.
Jul 7 00:10:24.770117 kubelet[2344]: I0707 00:10:24.770068 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:10:24.770434 kubelet[2344]: I0707 00:10:24.770119 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:10:24.770434 kubelet[2344]: I0707 00:10:24.770144 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:10:24.770434 kubelet[2344]: I0707 00:10:24.770170 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:10:24.770434 kubelet[2344]: I0707 00:10:24.770195 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:10:24.770434 kubelet[2344]: I0707 00:10:24.770219 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:10:24.770557 kubelet[2344]: I0707 00:10:24.770238 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:10:24.770557 kubelet[2344]: I0707 00:10:24.770261 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:10:24.770557 kubelet[2344]: I0707 00:10:24.770286 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 00:10:24.773493 kubelet[2344]: E0707 00:10:24.773467 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 00:10:24.776884 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 7 00:10:24.784817 kubelet[2344]: E0707 00:10:24.784782 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 00:10:24.786398 kubelet[2344]: I0707 00:10:24.786356 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 00:10:24.786878 kubelet[2344]: E0707 00:10:24.786798 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 7 00:10:24.787744 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 7 00:10:24.789481 kubelet[2344]: E0707 00:10:24.789443 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 00:10:24.869123 kubelet[2344]: E0707 00:10:24.869067 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms"
Jul 7 00:10:24.989032 kubelet[2344]: I0707 00:10:24.988980 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 00:10:24.989452 kubelet[2344]: E0707 00:10:24.989402 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 7 00:10:25.074174 kubelet[2344]: E0707 00:10:25.074014 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:10:25.074754 containerd[1548]: time="2025-07-07T00:10:25.074699088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:81e481258d734b5b61be5baf8f41834e,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:25.086019 kubelet[2344]: E0707 00:10:25.085977 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:10:25.086450 containerd[1548]: time="2025-07-07T00:10:25.086418902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:25.091606 kubelet[2344]: E0707 00:10:25.090803 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:10:25.091693 containerd[1548]: time="2025-07-07T00:10:25.091202293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:25.106533 containerd[1548]: time="2025-07-07T00:10:25.106495520Z" level=info msg="connecting to shim 5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec" address="unix:///run/containerd/s/7c052508e10a74b2095ce49679789d49bd4b36ee2aacc21796966d7e01e681e5" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:10:25.116693 containerd[1548]: time="2025-07-07T00:10:25.116633078Z" level=info msg="connecting to shim 76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3" address="unix:///run/containerd/s/342d148534000bb4985cceef9e3ae5a7572d23fde380171cc98c136751032249" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:10:25.133254 containerd[1548]: time="2025-07-07T00:10:25.133198402Z" level=info msg="connecting to shim d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5" address="unix:///run/containerd/s/366131a669021dc8f1bfc76596a7f5717381d893db26063820ca1a79df64ac26" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:10:25.162855 systemd[1]: Started cri-containerd-5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec.scope - libcontainer container 5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec.
Jul 7 00:10:25.167815 systemd[1]: Started cri-containerd-76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3.scope - libcontainer container 76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3.
Jul 7 00:10:25.205160 systemd[1]: Started cri-containerd-d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5.scope - libcontainer container d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5.
Jul 7 00:10:25.226871 kubelet[2344]: W0707 00:10:25.226789 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 7 00:10:25.227659 kubelet[2344]: E0707 00:10:25.226881 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:10:25.228517 containerd[1548]: time="2025-07-07T00:10:25.228477354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:81e481258d734b5b61be5baf8f41834e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec\""
Jul 7 00:10:25.230218 kubelet[2344]: E0707 00:10:25.230191 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:10:25.232932 containerd[1548]: time="2025-07-07T00:10:25.232897672Z" level=info msg="CreateContainer within sandbox \"5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 00:10:25.244564 containerd[1548]: time="2025-07-07T00:10:25.244519284Z" level=info msg="Container ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:10:25.251972 containerd[1548]: time="2025-07-07T00:10:25.251940193Z" level=info msg="CreateContainer within sandbox \"5b783e7f7eec24d94f5d969289d6f0e3e3ff62647ea1768b1b47d31e507b24ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9\""
Jul 7 00:10:25.253474 containerd[1548]: time="2025-07-07T00:10:25.252619779Z" level=info msg="StartContainer for \"ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9\""
Jul 7 00:10:25.253917 containerd[1548]: time="2025-07-07T00:10:25.253882613Z" level=info msg="connecting to shim ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9" address="unix:///run/containerd/s/7c052508e10a74b2095ce49679789d49bd4b36ee2aacc21796966d7e01e681e5" protocol=ttrpc version=3
Jul 7 00:10:25.255899 containerd[1548]: time="2025-07-07T00:10:25.255791809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3\""
Jul 7 00:10:25.256682 kubelet[2344]: E0707 00:10:25.256618 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1
8.8.8.8" Jul 7 00:10:25.258970 containerd[1548]: time="2025-07-07T00:10:25.258931256Z" level=info msg="CreateContainer within sandbox \"76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:10:25.269717 containerd[1548]: time="2025-07-07T00:10:25.269636984Z" level=info msg="Container 235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:25.282574 containerd[1548]: time="2025-07-07T00:10:25.282501530Z" level=info msg="CreateContainer within sandbox \"76cea4a6faa16f003b42c5ce315f1deae52fe0eeffac95907f9cdc88491cced3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c\"" Jul 7 00:10:25.284487 containerd[1548]: time="2025-07-07T00:10:25.284438648Z" level=info msg="StartContainer for \"235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c\"" Jul 7 00:10:25.286245 containerd[1548]: time="2025-07-07T00:10:25.286186388Z" level=info msg="connecting to shim 235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c" address="unix:///run/containerd/s/342d148534000bb4985cceef9e3ae5a7572d23fde380171cc98c136751032249" protocol=ttrpc version=3 Jul 7 00:10:25.290366 containerd[1548]: time="2025-07-07T00:10:25.290333264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5\"" Jul 7 00:10:25.291647 kubelet[2344]: E0707 00:10:25.291541 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:25.296385 containerd[1548]: time="2025-07-07T00:10:25.295778305Z" level=info msg="CreateContainer 
within sandbox \"d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:10:25.295882 systemd[1]: Started cri-containerd-ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9.scope - libcontainer container ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9. Jul 7 00:10:25.307354 containerd[1548]: time="2025-07-07T00:10:25.307229389Z" level=info msg="Container b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:25.312835 systemd[1]: Started cri-containerd-235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c.scope - libcontainer container 235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c. Jul 7 00:10:25.314853 containerd[1548]: time="2025-07-07T00:10:25.314821336Z" level=info msg="CreateContainer within sandbox \"d33af3991458e2ddf57a81d0efeb3e93c799b1b43a1129788a164575ab0776d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d\"" Jul 7 00:10:25.315498 containerd[1548]: time="2025-07-07T00:10:25.315326996Z" level=info msg="StartContainer for \"b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d\"" Jul 7 00:10:25.316419 containerd[1548]: time="2025-07-07T00:10:25.316379591Z" level=info msg="connecting to shim b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d" address="unix:///run/containerd/s/366131a669021dc8f1bfc76596a7f5717381d893db26063820ca1a79df64ac26" protocol=ttrpc version=3 Jul 7 00:10:25.368867 systemd[1]: Started cri-containerd-b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d.scope - libcontainer container b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d. 
Jul 7 00:10:25.391405 kubelet[2344]: I0707 00:10:25.391366 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:10:25.392769 kubelet[2344]: E0707 00:10:25.392743 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 7 00:10:25.404206 kubelet[2344]: W0707 00:10:25.404092 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 7 00:10:25.404357 kubelet[2344]: E0707 00:10:25.404220 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:25.523999 containerd[1548]: time="2025-07-07T00:10:25.523850431Z" level=info msg="StartContainer for \"ffd031ecfd9275d81d00995f3253a6f16158f4924bc6609571b42071a7e5b9a9\" returns successfully" Jul 7 00:10:25.524972 containerd[1548]: time="2025-07-07T00:10:25.524953927Z" level=info msg="StartContainer for \"b45aa1922def43c4b5cc0153884a87393fc851a1115fd1fe039306fcfd94dd7d\" returns successfully" Jul 7 00:10:25.525581 containerd[1548]: time="2025-07-07T00:10:25.525549738Z" level=info msg="StartContainer for \"235bb3085bac78d6f7940af5060106df304b19f140a619f1f1c949068cbb084c\" returns successfully" Jul 7 00:10:25.659165 kubelet[2344]: E0707 00:10:25.659057 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:10:25.659264 kubelet[2344]: E0707 00:10:25.659174 2344 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:25.661573 kubelet[2344]: E0707 00:10:25.661555 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:10:25.661655 kubelet[2344]: E0707 00:10:25.661640 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:25.664115 kubelet[2344]: E0707 00:10:25.664097 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:10:25.664211 kubelet[2344]: E0707 00:10:25.664196 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:26.196165 kubelet[2344]: I0707 00:10:26.195725 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:10:26.659046 kubelet[2344]: E0707 00:10:26.659003 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 00:10:26.666324 kubelet[2344]: E0707 00:10:26.666304 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:10:26.666442 kubelet[2344]: E0707 00:10:26.666428 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:26.666528 kubelet[2344]: E0707 00:10:26.666303 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:10:26.666588 kubelet[2344]: E0707 00:10:26.666531 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:26.740832 kubelet[2344]: I0707 00:10:26.740781 2344 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:10:26.740832 kubelet[2344]: E0707 00:10:26.740821 2344 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 00:10:26.763040 kubelet[2344]: E0707 00:10:26.762993 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:26.863910 kubelet[2344]: E0707 00:10:26.863845 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:26.964195 kubelet[2344]: E0707 00:10:26.964155 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:27.065054 kubelet[2344]: E0707 00:10:27.065007 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:27.165714 kubelet[2344]: E0707 00:10:27.165617 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:27.266088 kubelet[2344]: E0707 00:10:27.265952 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:27.366662 kubelet[2344]: E0707 00:10:27.366614 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:27.467369 kubelet[2344]: E0707 00:10:27.467309 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 7 00:10:27.568088 kubelet[2344]: I0707 00:10:27.567974 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:27.573041 kubelet[2344]: E0707 00:10:27.573005 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:27.573041 kubelet[2344]: I0707 00:10:27.573030 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:27.574837 kubelet[2344]: E0707 00:10:27.574814 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:27.574837 kubelet[2344]: I0707 00:10:27.574834 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:27.576090 kubelet[2344]: E0707 00:10:27.576062 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:27.666142 kubelet[2344]: I0707 00:10:27.666081 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:27.672189 kubelet[2344]: E0707 00:10:27.672092 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:28.260289 kubelet[2344]: I0707 00:10:28.260238 2344 apiserver.go:52] "Watching apiserver" Jul 7 00:10:28.267223 kubelet[2344]: I0707 00:10:28.267182 2344 desired_state_of_world_populator.go:158] "Finished populating initial desired state of 
world" Jul 7 00:10:28.659708 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-9.scope)... Jul 7 00:10:28.659729 systemd[1]: Reloading... Jul 7 00:10:28.668621 kubelet[2344]: E0707 00:10:28.668584 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:28.740719 zram_generator::config[2665]: No configuration found. Jul 7 00:10:28.857041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:10:28.990565 systemd[1]: Reloading finished in 330 ms. Jul 7 00:10:29.016692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:29.028117 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:10:29.028440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:29.028490 systemd[1]: kubelet.service: Consumed 1.078s CPU time, 134M memory peak. Jul 7 00:10:29.030455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:29.226130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:29.230218 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:10:29.274127 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:10:29.274127 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 7 00:10:29.274127 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:10:29.274127 kubelet[2707]: I0707 00:10:29.274076 2707 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:10:29.281701 kubelet[2707]: I0707 00:10:29.281653 2707 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:10:29.281701 kubelet[2707]: I0707 00:10:29.281688 2707 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:10:29.281924 kubelet[2707]: I0707 00:10:29.281901 2707 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:10:29.282995 kubelet[2707]: I0707 00:10:29.282972 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:10:29.285032 kubelet[2707]: I0707 00:10:29.284974 2707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:10:29.288582 kubelet[2707]: I0707 00:10:29.288546 2707 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:10:29.296599 kubelet[2707]: I0707 00:10:29.296552 2707 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:10:29.296873 kubelet[2707]: I0707 00:10:29.296827 2707 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:10:29.297056 kubelet[2707]: I0707 00:10:29.296861 2707 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:10:29.297260 kubelet[2707]: I0707 00:10:29.297061 2707 topology_manager.go:138] "Creating topology manager with none policy" Jul 
7 00:10:29.297260 kubelet[2707]: I0707 00:10:29.297069 2707 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:10:29.297260 kubelet[2707]: I0707 00:10:29.297124 2707 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:10:29.297358 kubelet[2707]: I0707 00:10:29.297290 2707 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:10:29.297358 kubelet[2707]: I0707 00:10:29.297310 2707 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:10:29.297358 kubelet[2707]: I0707 00:10:29.297331 2707 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:10:29.297358 kubelet[2707]: I0707 00:10:29.297341 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:10:29.298129 kubelet[2707]: I0707 00:10:29.298100 2707 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:10:29.298458 kubelet[2707]: I0707 00:10:29.298429 2707 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:10:29.298898 kubelet[2707]: I0707 00:10:29.298879 2707 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:10:29.299006 kubelet[2707]: I0707 00:10:29.298914 2707 server.go:1287] "Started kubelet" Jul 7 00:10:29.301337 kubelet[2707]: I0707 00:10:29.301231 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:10:29.301688 kubelet[2707]: I0707 00:10:29.301625 2707 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:10:29.301771 kubelet[2707]: I0707 00:10:29.301706 2707 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:10:29.304697 kubelet[2707]: I0707 00:10:29.304315 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:10:29.304697 kubelet[2707]: I0707 00:10:29.304684 2707 server.go:479] 
"Adding debug handlers to kubelet server" Jul 7 00:10:29.305272 kubelet[2707]: I0707 00:10:29.305244 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:10:29.308696 kubelet[2707]: E0707 00:10:29.308071 2707 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:10:29.308696 kubelet[2707]: I0707 00:10:29.308278 2707 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:10:29.308962 kubelet[2707]: I0707 00:10:29.308892 2707 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:10:29.309186 kubelet[2707]: I0707 00:10:29.309144 2707 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:10:29.314709 kubelet[2707]: E0707 00:10:29.314654 2707 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:10:29.314876 kubelet[2707]: I0707 00:10:29.314858 2707 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:10:29.315285 kubelet[2707]: I0707 00:10:29.315264 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:10:29.317504 kubelet[2707]: I0707 00:10:29.317488 2707 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:10:29.321516 kubelet[2707]: I0707 00:10:29.321468 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:10:29.322807 kubelet[2707]: I0707 00:10:29.322774 2707 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:10:29.322807 kubelet[2707]: I0707 00:10:29.322805 2707 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:10:29.322990 kubelet[2707]: I0707 00:10:29.322831 2707 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 00:10:29.322990 kubelet[2707]: I0707 00:10:29.322838 2707 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:10:29.322990 kubelet[2707]: E0707 00:10:29.322885 2707 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:10:29.372003 kubelet[2707]: I0707 00:10:29.371969 2707 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:10:29.372003 kubelet[2707]: I0707 00:10:29.371987 2707 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:10:29.372003 kubelet[2707]: I0707 00:10:29.372008 2707 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:10:29.372210 kubelet[2707]: I0707 00:10:29.372174 2707 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:10:29.372210 kubelet[2707]: I0707 00:10:29.372184 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:10:29.372210 kubelet[2707]: I0707 00:10:29.372204 2707 policy_none.go:49] "None policy: Start" Jul 7 00:10:29.372289 kubelet[2707]: I0707 00:10:29.372217 2707 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:10:29.372289 kubelet[2707]: I0707 00:10:29.372229 2707 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:10:29.372333 kubelet[2707]: I0707 00:10:29.372327 2707 state_mem.go:75] "Updated machine memory state" Jul 7 00:10:29.379648 kubelet[2707]: I0707 00:10:29.379611 2707 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:10:29.379896 kubelet[2707]: I0707 00:10:29.379868 
2707 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:10:29.379984 kubelet[2707]: I0707 00:10:29.379889 2707 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:10:29.380551 kubelet[2707]: I0707 00:10:29.380088 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:10:29.381276 kubelet[2707]: E0707 00:10:29.381252 2707 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:10:29.423631 kubelet[2707]: I0707 00:10:29.423563 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:29.423631 kubelet[2707]: I0707 00:10:29.423601 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:29.423840 kubelet[2707]: I0707 00:10:29.423739 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.431076 kubelet[2707]: E0707 00:10:29.431046 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:29.485245 kubelet[2707]: I0707 00:10:29.485189 2707 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:10:29.491513 kubelet[2707]: I0707 00:10:29.491481 2707 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 00:10:29.491659 kubelet[2707]: I0707 00:10:29.491579 2707 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:10:29.510370 kubelet[2707]: I0707 00:10:29.510328 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.510370 kubelet[2707]: I0707 00:10:29.510361 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:29.510370 kubelet[2707]: I0707 00:10:29.510381 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:29.510602 kubelet[2707]: I0707 00:10:29.510399 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.510602 kubelet[2707]: I0707 00:10:29.510414 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.510602 kubelet[2707]: I0707 00:10:29.510428 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.510602 kubelet[2707]: I0707 00:10:29.510476 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:10:29.510602 kubelet[2707]: I0707 00:10:29.510523 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:29.510785 kubelet[2707]: I0707 00:10:29.510546 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81e481258d734b5b61be5baf8f41834e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"81e481258d734b5b61be5baf8f41834e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:29.633083 sudo[2744]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:10:29.633412 sudo[2744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:10:29.731055 kubelet[2707]: E0707 00:10:29.730795 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:29.731055 kubelet[2707]: E0707 00:10:29.730798 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:29.732093 kubelet[2707]: E0707 00:10:29.732056 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:30.105510 sudo[2744]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:30.298259 kubelet[2707]: I0707 00:10:30.298195 2707 apiserver.go:52] "Watching apiserver" Jul 7 00:10:30.309269 kubelet[2707]: I0707 00:10:30.309227 2707 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:10:30.336243 kubelet[2707]: I0707 00:10:30.335877 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:30.336243 kubelet[2707]: I0707 00:10:30.335992 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:30.336243 kubelet[2707]: E0707 00:10:30.336248 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:30.343192 kubelet[2707]: E0707 00:10:30.343151 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:10:30.343341 kubelet[2707]: E0707 00:10:30.343293 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:30.344037 kubelet[2707]: E0707 00:10:30.344002 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 00:10:30.344210 kubelet[2707]: E0707 00:10:30.344118 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:30.380976 kubelet[2707]: I0707 00:10:30.380551 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.380500994 podStartE2EDuration="3.380500994s" podCreationTimestamp="2025-07-07 00:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:30.380369017 +0000 UTC m=+1.146260671" watchObservedRunningTime="2025-07-07 00:10:30.380500994 +0000 UTC m=+1.146392648" Jul 7 00:10:30.388178 kubelet[2707]: I0707 00:10:30.387748 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.387735773 podStartE2EDuration="1.387735773s" podCreationTimestamp="2025-07-07 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:30.387474325 +0000 UTC m=+1.153365979" watchObservedRunningTime="2025-07-07 00:10:30.387735773 +0000 UTC m=+1.153627417" Jul 7 00:10:30.393690 kubelet[2707]: I0707 00:10:30.393623 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.393599309 podStartE2EDuration="1.393599309s" podCreationTimestamp="2025-07-07 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:30.393536497 +0000 UTC m=+1.159428142" watchObservedRunningTime="2025-07-07 00:10:30.393599309 +0000 UTC m=+1.159490963" Jul 7 00:10:31.337278 kubelet[2707]: E0707 00:10:31.337233 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 7 00:10:31.337764 kubelet[2707]: E0707 00:10:31.337509 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:31.446334 sudo[1781]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:31.447775 sshd[1780]: Connection closed by 10.0.0.1 port 49566 Jul 7 00:10:31.448179 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:31.453308 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:49566.service: Deactivated successfully. Jul 7 00:10:31.456371 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:10:31.456653 systemd[1]: session-9.scope: Consumed 5.055s CPU time, 258M memory peak. Jul 7 00:10:31.458393 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:10:31.460165 systemd-logind[1541]: Removed session 9. Jul 7 00:10:35.525440 kubelet[2707]: I0707 00:10:35.525384 2707 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:10:35.525937 containerd[1548]: time="2025-07-07T00:10:35.525849257Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:10:35.526218 kubelet[2707]: I0707 00:10:35.526046 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:10:36.243095 update_engine[1542]: I20250707 00:10:36.242978 1542 update_attempter.cc:509] Updating boot flags... Jul 7 00:10:36.511038 systemd[1]: Created slice kubepods-besteffort-podb49b7c4d_0e91_46f8_b01b_ee32e881787b.slice - libcontainer container kubepods-besteffort-podb49b7c4d_0e91_46f8_b01b_ee32e881787b.slice. Jul 7 00:10:36.514466 systemd[1]: Created slice kubepods-burstable-podce567480_1348_435b_8dbd_3c311e0e0c9d.slice - libcontainer container kubepods-burstable-podce567480_1348_435b_8dbd_3c311e0e0c9d.slice. 
Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.553946 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-hostproc\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.553988 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b49b7c4d-0e91-46f8-b01b-ee32e881787b-kube-proxy\") pod \"kube-proxy-mxljq\" (UID: \"b49b7c4d-0e91-46f8-b01b-ee32e881787b\") " pod="kube-system/kube-proxy-mxljq" Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.554004 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-cgroup\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.554017 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b49b7c4d-0e91-46f8-b01b-ee32e881787b-xtables-lock\") pod \"kube-proxy-mxljq\" (UID: \"b49b7c4d-0e91-46f8-b01b-ee32e881787b\") " pod="kube-system/kube-proxy-mxljq" Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.554029 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-etc-cni-netd\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.554699 kubelet[2707]: I0707 00:10:36.554043 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce567480-1348-435b-8dbd-3c311e0e0c9d-clustermesh-secrets\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554060 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85r7l\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-kube-api-access-85r7l\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554073 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-run\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554086 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cni-path\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554097 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-lib-modules\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554112 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b49b7c4d-0e91-46f8-b01b-ee32e881787b-lib-modules\") pod 
\"kube-proxy-mxljq\" (UID: \"b49b7c4d-0e91-46f8-b01b-ee32e881787b\") " pod="kube-system/kube-proxy-mxljq" Jul 7 00:10:36.555206 kubelet[2707]: I0707 00:10:36.554124 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-kernel\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554137 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvmk\" (UniqueName: \"kubernetes.io/projected/b49b7c4d-0e91-46f8-b01b-ee32e881787b-kube-api-access-jsvmk\") pod \"kube-proxy-mxljq\" (UID: \"b49b7c4d-0e91-46f8-b01b-ee32e881787b\") " pod="kube-system/kube-proxy-mxljq" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554153 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-net\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554169 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-config-path\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554183 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-hubble-tls\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " 
pod="kube-system/cilium-926kf" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554196 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-bpf-maps\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.555336 kubelet[2707]: I0707 00:10:36.554211 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-xtables-lock\") pod \"cilium-926kf\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " pod="kube-system/cilium-926kf" Jul 7 00:10:36.831022 kubelet[2707]: E0707 00:10:36.830915 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:36.831136 kubelet[2707]: E0707 00:10:36.831029 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:36.831714 containerd[1548]: time="2025-07-07T00:10:36.831518157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-926kf,Uid:ce567480-1348-435b-8dbd-3c311e0e0c9d,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:36.832032 containerd[1548]: time="2025-07-07T00:10:36.831518147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxljq,Uid:b49b7c4d-0e91-46f8-b01b-ee32e881787b,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:36.868197 kubelet[2707]: E0707 00:10:36.868147 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:36.875534 systemd[1]: Created slice 
kubepods-besteffort-pod4f61aa24_e0bc_47d9_be07_97025b447499.slice - libcontainer container kubepods-besteffort-pod4f61aa24_e0bc_47d9_be07_97025b447499.slice. Jul 7 00:10:36.906721 containerd[1548]: time="2025-07-07T00:10:36.906663253Z" level=info msg="connecting to shim 43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2" address="unix:///run/containerd/s/dcc2e9c2e567da5119c49c31a1c8578bb35cdc546948a817c1527880e193900d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:10:36.925320 containerd[1548]: time="2025-07-07T00:10:36.925217205Z" level=info msg="connecting to shim 826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:10:36.948807 systemd[1]: Started cri-containerd-43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2.scope - libcontainer container 43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2. Jul 7 00:10:36.952353 systemd[1]: Started cri-containerd-826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e.scope - libcontainer container 826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e. 
Jul 7 00:10:36.956321 kubelet[2707]: I0707 00:10:36.956295 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwg7\" (UniqueName: \"kubernetes.io/projected/4f61aa24-e0bc-47d9-be07-97025b447499-kube-api-access-7xwg7\") pod \"cilium-operator-6c4d7847fc-f2rbt\" (UID: \"4f61aa24-e0bc-47d9-be07-97025b447499\") " pod="kube-system/cilium-operator-6c4d7847fc-f2rbt" Jul 7 00:10:36.956537 kubelet[2707]: I0707 00:10:36.956505 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f61aa24-e0bc-47d9-be07-97025b447499-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f2rbt\" (UID: \"4f61aa24-e0bc-47d9-be07-97025b447499\") " pod="kube-system/cilium-operator-6c4d7847fc-f2rbt" Jul 7 00:10:36.991899 containerd[1548]: time="2025-07-07T00:10:36.991849964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxljq,Uid:b49b7c4d-0e91-46f8-b01b-ee32e881787b,Namespace:kube-system,Attempt:0,} returns sandbox id \"43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2\"" Jul 7 00:10:36.992655 kubelet[2707]: E0707 00:10:36.992631 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:36.995398 containerd[1548]: time="2025-07-07T00:10:36.995307371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-926kf,Uid:ce567480-1348-435b-8dbd-3c311e0e0c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\"" Jul 7 00:10:36.995503 containerd[1548]: time="2025-07-07T00:10:36.995433827Z" level=info msg="CreateContainer within sandbox \"43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:10:36.996020 
kubelet[2707]: E0707 00:10:36.996004 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:36.998223 containerd[1548]: time="2025-07-07T00:10:36.998012558Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:10:37.009705 containerd[1548]: time="2025-07-07T00:10:37.009663303Z" level=info msg="Container 7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:37.018373 containerd[1548]: time="2025-07-07T00:10:37.018325226Z" level=info msg="CreateContainer within sandbox \"43c494bd696d6e7c83adc23622dfe3ed7f9907933fc03eded4c4e9dce8e73ba2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38\"" Jul 7 00:10:37.019014 containerd[1548]: time="2025-07-07T00:10:37.018755822Z" level=info msg="StartContainer for \"7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38\"" Jul 7 00:10:37.020074 containerd[1548]: time="2025-07-07T00:10:37.020053281Z" level=info msg="connecting to shim 7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38" address="unix:///run/containerd/s/dcc2e9c2e567da5119c49c31a1c8578bb35cdc546948a817c1527880e193900d" protocol=ttrpc version=3 Jul 7 00:10:37.040800 systemd[1]: Started cri-containerd-7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38.scope - libcontainer container 7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38. 
Jul 7 00:10:37.073693 kubelet[2707]: E0707 00:10:37.072273 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.084276 containerd[1548]: time="2025-07-07T00:10:37.083755887Z" level=info msg="StartContainer for \"7bed523311ad02da5466ddffeb1e5f89eb14db2bf9b1e35f58fe1714a15c2a38\" returns successfully" Jul 7 00:10:37.183267 kubelet[2707]: E0707 00:10:37.183221 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.183695 containerd[1548]: time="2025-07-07T00:10:37.183630113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f2rbt,Uid:4f61aa24-e0bc-47d9-be07-97025b447499,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:37.201989 containerd[1548]: time="2025-07-07T00:10:37.201868290Z" level=info msg="connecting to shim 20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e" address="unix:///run/containerd/s/2bacc1aeb23830c789f24ba53a6e06f39ea570cce01cba3e75d874fae54aaa96" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:10:37.227396 systemd[1]: Started cri-containerd-20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e.scope - libcontainer container 20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e. 
Jul 7 00:10:37.273031 containerd[1548]: time="2025-07-07T00:10:37.272986995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f2rbt,Uid:4f61aa24-e0bc-47d9-be07-97025b447499,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\"" Jul 7 00:10:37.273813 kubelet[2707]: E0707 00:10:37.273794 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.350870 kubelet[2707]: E0707 00:10:37.349983 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.350870 kubelet[2707]: E0707 00:10:37.350366 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.350870 kubelet[2707]: E0707 00:10:37.350816 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:37.367580 kubelet[2707]: I0707 00:10:37.367275 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxljq" podStartSLOduration=1.36725512 podStartE2EDuration="1.36725512s" podCreationTimestamp="2025-07-07 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:37.367171002 +0000 UTC m=+8.133062656" watchObservedRunningTime="2025-07-07 00:10:37.36725512 +0000 UTC m=+8.133146774" Jul 7 00:10:37.901463 kubelet[2707]: E0707 00:10:37.901378 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:38.351252 kubelet[2707]: E0707 00:10:38.351224 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:38.351769 kubelet[2707]: E0707 00:10:38.351596 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:41.120002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134417698.mount: Deactivated successfully. Jul 7 00:10:45.765859 containerd[1548]: time="2025-07-07T00:10:45.765804448Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:45.766632 containerd[1548]: time="2025-07-07T00:10:45.766605503Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:10:45.767803 containerd[1548]: time="2025-07-07T00:10:45.767755893Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:45.769202 containerd[1548]: time="2025-07-07T00:10:45.769166727Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.771126237s" Jul 7 00:10:45.769202 containerd[1548]: time="2025-07-07T00:10:45.769196511Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:10:45.770369 containerd[1548]: time="2025-07-07T00:10:45.770343493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:10:45.771494 containerd[1548]: time="2025-07-07T00:10:45.771466495Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:10:45.780808 containerd[1548]: time="2025-07-07T00:10:45.780756087Z" level=info msg="Container 88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:45.785106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057383160.mount: Deactivated successfully. 
Jul 7 00:10:45.787219 containerd[1548]: time="2025-07-07T00:10:45.787183785Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\"" Jul 7 00:10:45.787746 containerd[1548]: time="2025-07-07T00:10:45.787704064Z" level=info msg="StartContainer for \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\"" Jul 7 00:10:45.788805 containerd[1548]: time="2025-07-07T00:10:45.788770885Z" level=info msg="connecting to shim 88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" protocol=ttrpc version=3 Jul 7 00:10:45.812802 systemd[1]: Started cri-containerd-88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2.scope - libcontainer container 88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2. Jul 7 00:10:45.844635 containerd[1548]: time="2025-07-07T00:10:45.844596397Z" level=info msg="StartContainer for \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" returns successfully" Jul 7 00:10:45.854512 systemd[1]: cri-containerd-88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2.scope: Deactivated successfully. 
Jul 7 00:10:45.857583 containerd[1548]: time="2025-07-07T00:10:45.857527844Z" level=info msg="received exit event container_id:\"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" id:\"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" pid:3148 exited_at:{seconds:1751847045 nanos:857138415}" Jul 7 00:10:45.857583 containerd[1548]: time="2025-07-07T00:10:45.857585577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" id:\"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" pid:3148 exited_at:{seconds:1751847045 nanos:857138415}" Jul 7 00:10:45.877079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2-rootfs.mount: Deactivated successfully. Jul 7 00:10:46.371933 kubelet[2707]: E0707 00:10:46.371893 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:47.301298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267163593.mount: Deactivated successfully. 
Jul 7 00:10:47.374784 kubelet[2707]: E0707 00:10:47.374743 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:47.377694 containerd[1548]: time="2025-07-07T00:10:47.377628283Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:10:47.387685 containerd[1548]: time="2025-07-07T00:10:47.387463809Z" level=info msg="Container 5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:47.394881 containerd[1548]: time="2025-07-07T00:10:47.394835427Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\"" Jul 7 00:10:47.395601 containerd[1548]: time="2025-07-07T00:10:47.395370626Z" level=info msg="StartContainer for \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\"" Jul 7 00:10:47.396339 containerd[1548]: time="2025-07-07T00:10:47.396308464Z" level=info msg="connecting to shim 5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" protocol=ttrpc version=3 Jul 7 00:10:47.422938 systemd[1]: Started cri-containerd-5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa.scope - libcontainer container 5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa. 
Jul 7 00:10:47.474864 containerd[1548]: time="2025-07-07T00:10:47.474809787Z" level=info msg="StartContainer for \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" returns successfully" Jul 7 00:10:47.492721 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:10:47.493343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:10:47.493718 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:10:47.495227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:10:47.496821 systemd[1]: cri-containerd-5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa.scope: Deactivated successfully. Jul 7 00:10:47.497010 containerd[1548]: time="2025-07-07T00:10:47.496951119Z" level=info msg="received exit event container_id:\"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" id:\"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" pid:3200 exited_at:{seconds:1751847047 nanos:496565215}" Jul 7 00:10:47.497340 containerd[1548]: time="2025-07-07T00:10:47.497143514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" id:\"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" pid:3200 exited_at:{seconds:1751847047 nanos:496565215}" Jul 7 00:10:47.530157 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:10:48.299179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa-rootfs.mount: Deactivated successfully. 
Jul 7 00:10:48.378737 kubelet[2707]: E0707 00:10:48.378700 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:48.381685 containerd[1548]: time="2025-07-07T00:10:48.381611710Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:10:48.451588 containerd[1548]: time="2025-07-07T00:10:48.451539036Z" level=info msg="Container 20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:48.462577 containerd[1548]: time="2025-07-07T00:10:48.462531751Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\"" Jul 7 00:10:48.463453 containerd[1548]: time="2025-07-07T00:10:48.463406408Z" level=info msg="StartContainer for \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\"" Jul 7 00:10:48.465025 containerd[1548]: time="2025-07-07T00:10:48.464988662Z" level=info msg="connecting to shim 20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" protocol=ttrpc version=3 Jul 7 00:10:48.496877 systemd[1]: Started cri-containerd-20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f.scope - libcontainer container 20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f. 
Jul 7 00:10:48.508657 containerd[1548]: time="2025-07-07T00:10:48.508618549Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:48.509341 containerd[1548]: time="2025-07-07T00:10:48.509316003Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:10:48.510522 containerd[1548]: time="2025-07-07T00:10:48.510502095Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:48.511654 containerd[1548]: time="2025-07-07T00:10:48.511630665Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.741256255s" Jul 7 00:10:48.511715 containerd[1548]: time="2025-07-07T00:10:48.511658713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:10:48.514144 containerd[1548]: time="2025-07-07T00:10:48.514110553Z" level=info msg="CreateContainer within sandbox \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:10:48.523692 containerd[1548]: time="2025-07-07T00:10:48.523508690Z" level=info msg="Container 
56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:48.531928 containerd[1548]: time="2025-07-07T00:10:48.531881464Z" level=info msg="CreateContainer within sandbox \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\"" Jul 7 00:10:48.533030 containerd[1548]: time="2025-07-07T00:10:48.532991074Z" level=info msg="StartContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\"" Jul 7 00:10:48.533924 containerd[1548]: time="2025-07-07T00:10:48.533779490Z" level=info msg="connecting to shim 56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74" address="unix:///run/containerd/s/2bacc1aeb23830c789f24ba53a6e06f39ea570cce01cba3e75d874fae54aaa96" protocol=ttrpc version=3 Jul 7 00:10:48.547886 systemd[1]: cri-containerd-20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f.scope: Deactivated successfully. 
Jul 7 00:10:48.549101 containerd[1548]: time="2025-07-07T00:10:48.549056003Z" level=info msg="StartContainer for \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" returns successfully" Jul 7 00:10:48.550195 containerd[1548]: time="2025-07-07T00:10:48.550010517Z" level=info msg="received exit event container_id:\"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" id:\"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" pid:3256 exited_at:{seconds:1751847048 nanos:549550722}" Jul 7 00:10:48.550195 containerd[1548]: time="2025-07-07T00:10:48.550097320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" id:\"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" pid:3256 exited_at:{seconds:1751847048 nanos:549550722}" Jul 7 00:10:48.561809 systemd[1]: Started cri-containerd-56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74.scope - libcontainer container 56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74. Jul 7 00:10:48.715612 containerd[1548]: time="2025-07-07T00:10:48.715538986Z" level=info msg="StartContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" returns successfully" Jul 7 00:10:49.304908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f-rootfs.mount: Deactivated successfully. 
Jul 7 00:10:49.382964 kubelet[2707]: E0707 00:10:49.382912 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:49.388093 kubelet[2707]: E0707 00:10:49.388027 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:49.390273 containerd[1548]: time="2025-07-07T00:10:49.390233691Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:10:49.409722 containerd[1548]: time="2025-07-07T00:10:49.409019462Z" level=info msg="Container 463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:49.416594 kubelet[2707]: I0707 00:10:49.416493 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f2rbt" podStartSLOduration=2.178497337 podStartE2EDuration="13.416458621s" podCreationTimestamp="2025-07-07 00:10:36 +0000 UTC" firstStartedPulling="2025-07-07 00:10:37.274196306 +0000 UTC m=+8.040087961" lastFinishedPulling="2025-07-07 00:10:48.512157591 +0000 UTC m=+19.278049245" observedRunningTime="2025-07-07 00:10:49.394346233 +0000 UTC m=+20.160237907" watchObservedRunningTime="2025-07-07 00:10:49.416458621 +0000 UTC m=+20.182350275" Jul 7 00:10:49.418648 containerd[1548]: time="2025-07-07T00:10:49.418550487Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\"" Jul 7 00:10:49.419132 containerd[1548]: time="2025-07-07T00:10:49.419104946Z" 
level=info msg="StartContainer for \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\"" Jul 7 00:10:49.421089 containerd[1548]: time="2025-07-07T00:10:49.421057530Z" level=info msg="connecting to shim 463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" protocol=ttrpc version=3 Jul 7 00:10:49.447872 systemd[1]: Started cri-containerd-463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00.scope - libcontainer container 463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00. Jul 7 00:10:49.484075 systemd[1]: cri-containerd-463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00.scope: Deactivated successfully. Jul 7 00:10:49.485099 containerd[1548]: time="2025-07-07T00:10:49.485054801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" id:\"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" pid:3330 exited_at:{seconds:1751847049 nanos:484517888}" Jul 7 00:10:49.488331 containerd[1548]: time="2025-07-07T00:10:49.488292823Z" level=info msg="received exit event container_id:\"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" id:\"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" pid:3330 exited_at:{seconds:1751847049 nanos:484517888}" Jul 7 00:10:49.490110 containerd[1548]: time="2025-07-07T00:10:49.490066112Z" level=info msg="StartContainer for \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" returns successfully" Jul 7 00:10:49.512618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00-rootfs.mount: Deactivated successfully. 
Jul 7 00:10:50.392861 kubelet[2707]: E0707 00:10:50.392820 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:50.393280 kubelet[2707]: E0707 00:10:50.392906 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:50.395969 containerd[1548]: time="2025-07-07T00:10:50.395936698Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:10:50.409097 containerd[1548]: time="2025-07-07T00:10:50.408729952Z" level=info msg="Container 85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:50.415897 containerd[1548]: time="2025-07-07T00:10:50.415854406Z" level=info msg="CreateContainer within sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\"" Jul 7 00:10:50.416381 containerd[1548]: time="2025-07-07T00:10:50.416291416Z" level=info msg="StartContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\"" Jul 7 00:10:50.417744 containerd[1548]: time="2025-07-07T00:10:50.417722989Z" level=info msg="connecting to shim 85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07" address="unix:///run/containerd/s/a91ba85b3999e75a0714d3f358c6f6a3ded2779285866febf460285eb73472b2" protocol=ttrpc version=3 Jul 7 00:10:50.448813 systemd[1]: Started cri-containerd-85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07.scope - libcontainer container 85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07. 
Jul 7 00:10:50.486113 containerd[1548]: time="2025-07-07T00:10:50.485970269Z" level=info msg="StartContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" returns successfully" Jul 7 00:10:50.556980 containerd[1548]: time="2025-07-07T00:10:50.556927915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" id:\"4036195089afa99a3d982f23c8c34333684436d84a580382d726dea2a564a9bf\" pid:3399 exited_at:{seconds:1751847050 nanos:556474210}" Jul 7 00:10:50.651122 kubelet[2707]: I0707 00:10:50.650993 2707 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:10:50.687934 systemd[1]: Created slice kubepods-burstable-pod7451dfb0_bbd0_4a79_9e03_46049955e3e2.slice - libcontainer container kubepods-burstable-pod7451dfb0_bbd0_4a79_9e03_46049955e3e2.slice. Jul 7 00:10:50.696763 systemd[1]: Created slice kubepods-burstable-pod36a99f85_88ba_498e_87d2_1ba3e1bc568a.slice - libcontainer container kubepods-burstable-pod36a99f85_88ba_498e_87d2_1ba3e1bc568a.slice. 
Jul 7 00:10:50.754711 kubelet[2707]: I0707 00:10:50.754607 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzd82\" (UniqueName: \"kubernetes.io/projected/7451dfb0-bbd0-4a79-9e03-46049955e3e2-kube-api-access-jzd82\") pod \"coredns-668d6bf9bc-5wk5w\" (UID: \"7451dfb0-bbd0-4a79-9e03-46049955e3e2\") " pod="kube-system/coredns-668d6bf9bc-5wk5w" Jul 7 00:10:50.754711 kubelet[2707]: I0707 00:10:50.754704 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sv4f\" (UniqueName: \"kubernetes.io/projected/36a99f85-88ba-498e-87d2-1ba3e1bc568a-kube-api-access-2sv4f\") pod \"coredns-668d6bf9bc-f4gwt\" (UID: \"36a99f85-88ba-498e-87d2-1ba3e1bc568a\") " pod="kube-system/coredns-668d6bf9bc-f4gwt" Jul 7 00:10:50.754882 kubelet[2707]: I0707 00:10:50.754736 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7451dfb0-bbd0-4a79-9e03-46049955e3e2-config-volume\") pod \"coredns-668d6bf9bc-5wk5w\" (UID: \"7451dfb0-bbd0-4a79-9e03-46049955e3e2\") " pod="kube-system/coredns-668d6bf9bc-5wk5w" Jul 7 00:10:50.754882 kubelet[2707]: I0707 00:10:50.754759 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a99f85-88ba-498e-87d2-1ba3e1bc568a-config-volume\") pod \"coredns-668d6bf9bc-f4gwt\" (UID: \"36a99f85-88ba-498e-87d2-1ba3e1bc568a\") " pod="kube-system/coredns-668d6bf9bc-f4gwt" Jul 7 00:10:50.993130 kubelet[2707]: E0707 00:10:50.993077 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:50.993933 containerd[1548]: time="2025-07-07T00:10:50.993881245Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5wk5w,Uid:7451dfb0-bbd0-4a79-9e03-46049955e3e2,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:51.001243 kubelet[2707]: E0707 00:10:51.001194 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:51.003008 containerd[1548]: time="2025-07-07T00:10:51.002391594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f4gwt,Uid:36a99f85-88ba-498e-87d2-1ba3e1bc568a,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:51.401477 kubelet[2707]: E0707 00:10:51.401057 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:51.415317 kubelet[2707]: I0707 00:10:51.415251 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-926kf" podStartSLOduration=6.642627561 podStartE2EDuration="15.415232496s" podCreationTimestamp="2025-07-07 00:10:36 +0000 UTC" firstStartedPulling="2025-07-07 00:10:36.997509851 +0000 UTC m=+7.763401495" lastFinishedPulling="2025-07-07 00:10:45.770114766 +0000 UTC m=+16.536006430" observedRunningTime="2025-07-07 00:10:51.413856639 +0000 UTC m=+22.179748293" watchObservedRunningTime="2025-07-07 00:10:51.415232496 +0000 UTC m=+22.181124150" Jul 7 00:10:52.403553 kubelet[2707]: E0707 00:10:52.403501 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:52.732369 systemd-networkd[1489]: cilium_host: Link UP Jul 7 00:10:52.732649 systemd-networkd[1489]: cilium_net: Link UP Jul 7 00:10:52.733118 systemd-networkd[1489]: cilium_net: Gained carrier Jul 7 00:10:52.733409 systemd-networkd[1489]: cilium_host: Gained carrier Jul 7 00:10:52.819860 systemd-networkd[1489]: 
cilium_host: Gained IPv6LL Jul 7 00:10:52.841403 systemd-networkd[1489]: cilium_vxlan: Link UP Jul 7 00:10:52.841414 systemd-networkd[1489]: cilium_vxlan: Gained carrier Jul 7 00:10:53.052714 kernel: NET: Registered PF_ALG protocol family Jul 7 00:10:53.405124 kubelet[2707]: E0707 00:10:53.405082 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:53.547853 systemd-networkd[1489]: cilium_net: Gained IPv6LL Jul 7 00:10:53.705300 systemd-networkd[1489]: lxc_health: Link UP Jul 7 00:10:53.707818 systemd-networkd[1489]: lxc_health: Gained carrier Jul 7 00:10:54.062074 systemd-networkd[1489]: lxc212f9e4c27e7: Link UP Jul 7 00:10:54.062736 kernel: eth0: renamed from tmp8b28a Jul 7 00:10:54.062481 systemd-networkd[1489]: lxc7491cbffc849: Link UP Jul 7 00:10:54.065697 kernel: eth0: renamed from tmp36437 Jul 7 00:10:54.070728 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL Jul 7 00:10:54.070997 systemd-networkd[1489]: lxc7491cbffc849: Gained carrier Jul 7 00:10:54.071172 systemd-networkd[1489]: lxc212f9e4c27e7: Gained carrier Jul 7 00:10:54.762869 systemd-networkd[1489]: lxc_health: Gained IPv6LL Jul 7 00:10:54.833716 kubelet[2707]: E0707 00:10:54.833641 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:55.409387 kubelet[2707]: E0707 00:10:55.409344 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:56.044789 systemd-networkd[1489]: lxc7491cbffc849: Gained IPv6LL Jul 7 00:10:56.106885 systemd-networkd[1489]: lxc212f9e4c27e7: Gained IPv6LL Jul 7 00:10:56.411574 kubelet[2707]: E0707 00:10:56.411448 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:57.419324 containerd[1548]: time="2025-07-07T00:10:57.419269023Z" level=info msg="connecting to shim 36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee" address="unix:///run/containerd/s/1ddff74863821f59053f8fe5ddb3c715948de5b2baa03eb41f3e445dc35437bd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:10:57.433015 containerd[1548]: time="2025-07-07T00:10:57.432917961Z" level=info msg="connecting to shim 8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1" address="unix:///run/containerd/s/6565d6f2f686553613c5b3f08dd2028aaa2aac9666110bee245ad719a735106e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:10:57.452791 systemd[1]: Started cri-containerd-36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee.scope - libcontainer container 36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee. Jul 7 00:10:57.455908 systemd[1]: Started cri-containerd-8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1.scope - libcontainer container 8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1. 
Jul 7 00:10:57.468598 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:10:57.475917 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:10:57.502538 containerd[1548]: time="2025-07-07T00:10:57.502475661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5wk5w,Uid:7451dfb0-bbd0-4a79-9e03-46049955e3e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee\"" Jul 7 00:10:57.503722 kubelet[2707]: E0707 00:10:57.503529 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:57.506308 containerd[1548]: time="2025-07-07T00:10:57.506261633Z" level=info msg="CreateContainer within sandbox \"36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:10:57.527504 containerd[1548]: time="2025-07-07T00:10:57.527447000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f4gwt,Uid:36a99f85-88ba-498e-87d2-1ba3e1bc568a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1\"" Jul 7 00:10:57.528166 kubelet[2707]: E0707 00:10:57.528143 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:57.530409 containerd[1548]: time="2025-07-07T00:10:57.530354409Z" level=info msg="CreateContainer within sandbox \"8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:10:57.538427 containerd[1548]: time="2025-07-07T00:10:57.538376872Z" 
level=info msg="Container 641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:57.541129 containerd[1548]: time="2025-07-07T00:10:57.541105346Z" level=info msg="Container 5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:10:57.545300 containerd[1548]: time="2025-07-07T00:10:57.545259267Z" level=info msg="CreateContainer within sandbox \"36437bcd2a00398f8af9defee575ed777ca774818fc0b7b70d30cc90574136ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516\"" Jul 7 00:10:57.545808 containerd[1548]: time="2025-07-07T00:10:57.545780120Z" level=info msg="StartContainer for \"641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516\"" Jul 7 00:10:57.546600 containerd[1548]: time="2025-07-07T00:10:57.546568749Z" level=info msg="connecting to shim 641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516" address="unix:///run/containerd/s/1ddff74863821f59053f8fe5ddb3c715948de5b2baa03eb41f3e445dc35437bd" protocol=ttrpc version=3 Jul 7 00:10:57.550829 containerd[1548]: time="2025-07-07T00:10:57.550790559Z" level=info msg="CreateContainer within sandbox \"8b28a71e6b0354955d04376a3bbbc9bebf73ca54e9b057d316fa893471d69de1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52\"" Jul 7 00:10:57.551488 containerd[1548]: time="2025-07-07T00:10:57.551346092Z" level=info msg="StartContainer for \"5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52\"" Jul 7 00:10:57.552264 containerd[1548]: time="2025-07-07T00:10:57.552232139Z" level=info msg="connecting to shim 5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52" address="unix:///run/containerd/s/6565d6f2f686553613c5b3f08dd2028aaa2aac9666110bee245ad719a735106e" protocol=ttrpc version=3 Jul 7 
00:10:57.565806 systemd[1]: Started cri-containerd-641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516.scope - libcontainer container 641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516. Jul 7 00:10:57.569810 systemd[1]: Started cri-containerd-5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52.scope - libcontainer container 5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52. Jul 7 00:10:57.602273 containerd[1548]: time="2025-07-07T00:10:57.602111968Z" level=info msg="StartContainer for \"641f986663cc9f8bf0e0f6ca4e61a2dd87436ff6baef809aca97cf87fd6b0516\" returns successfully" Jul 7 00:10:57.607367 containerd[1548]: time="2025-07-07T00:10:57.607309708Z" level=info msg="StartContainer for \"5c0fcc8ef40199108152f2f5b472d129047736c00344c83dfe237c4ca2f27e52\" returns successfully" Jul 7 00:10:58.418663 kubelet[2707]: E0707 00:10:58.418620 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:58.424789 kubelet[2707]: E0707 00:10:58.424570 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:58.481231 kubelet[2707]: I0707 00:10:58.481122 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5wk5w" podStartSLOduration=22.481101366 podStartE2EDuration="22.481101366s" podCreationTimestamp="2025-07-07 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:58.480778339 +0000 UTC m=+29.246669993" watchObservedRunningTime="2025-07-07 00:10:58.481101366 +0000 UTC m=+29.246993010" Jul 7 00:10:58.504175 kubelet[2707]: I0707 00:10:58.503640 2707 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-668d6bf9bc-f4gwt" podStartSLOduration=22.503618671 podStartE2EDuration="22.503618671s" podCreationTimestamp="2025-07-07 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:58.503215059 +0000 UTC m=+29.269106713" watchObservedRunningTime="2025-07-07 00:10:58.503618671 +0000 UTC m=+29.269510325" Jul 7 00:10:58.907543 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:59044.service - OpenSSH per-connection server daemon (10.0.0.1:59044). Jul 7 00:10:58.964488 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 59044 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:10:58.965991 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:58.970523 systemd-logind[1541]: New session 10 of user core. Jul 7 00:10:58.981823 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:10:59.110938 sshd[4053]: Connection closed by 10.0.0.1 port 59044 Jul 7 00:10:59.111242 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:59.116062 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:59044.service: Deactivated successfully. Jul 7 00:10:59.118189 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:10:59.119109 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:10:59.120373 systemd-logind[1541]: Removed session 10. 
Jul 7 00:10:59.425213 kubelet[2707]: E0707 00:10:59.425176 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:10:59.425404 kubelet[2707]: E0707 00:10:59.425327 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:00.427020 kubelet[2707]: E0707 00:11:00.426987 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:00.427490 kubelet[2707]: E0707 00:11:00.427104 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:04.138455 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046). Jul 7 00:11:04.192298 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:04.193862 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:04.198904 systemd-logind[1541]: New session 11 of user core. Jul 7 00:11:04.208853 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:11:04.324695 sshd[4072]: Connection closed by 10.0.0.1 port 59046 Jul 7 00:11:04.325018 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:04.329599 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:59046.service: Deactivated successfully. Jul 7 00:11:04.331772 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:11:04.332526 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 00:11:04.334150 systemd-logind[1541]: Removed session 11. Jul 7 00:11:09.340261 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:48500.service - OpenSSH per-connection server daemon (10.0.0.1:48500). Jul 7 00:11:09.394397 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 48500 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:09.396072 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:09.400522 systemd-logind[1541]: New session 12 of user core. Jul 7 00:11:09.409803 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:11:09.536203 sshd[4091]: Connection closed by 10.0.0.1 port 48500 Jul 7 00:11:09.535944 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:09.541148 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:48500.service: Deactivated successfully. Jul 7 00:11:09.543727 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:11:09.544829 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:11:09.546466 systemd-logind[1541]: Removed session 12. Jul 7 00:11:14.553008 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:48508.service - OpenSSH per-connection server daemon (10.0.0.1:48508). Jul 7 00:11:14.601223 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 48508 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:14.602596 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:14.607210 systemd-logind[1541]: New session 13 of user core. Jul 7 00:11:14.619824 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:11:14.735458 sshd[4107]: Connection closed by 10.0.0.1 port 48508 Jul 7 00:11:14.735845 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:14.750408 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:48508.service: Deactivated successfully. 
Jul 7 00:11:14.752208 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:11:14.753128 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:11:14.756185 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:48512.service - OpenSSH per-connection server daemon (10.0.0.1:48512). Jul 7 00:11:14.756843 systemd-logind[1541]: Removed session 13. Jul 7 00:11:14.809640 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 48512 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:14.811152 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:14.815495 systemd-logind[1541]: New session 14 of user core. Jul 7 00:11:14.827813 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:11:14.977156 sshd[4126]: Connection closed by 10.0.0.1 port 48512 Jul 7 00:11:14.979198 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:14.990000 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:48512.service: Deactivated successfully. Jul 7 00:11:14.993286 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:11:14.994398 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:11:14.999010 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520). Jul 7 00:11:15.000973 systemd-logind[1541]: Removed session 14. Jul 7 00:11:15.052910 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:15.054389 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:15.059104 systemd-logind[1541]: New session 15 of user core. Jul 7 00:11:15.074760 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 7 00:11:15.241723 sshd[4140]: Connection closed by 10.0.0.1 port 48520 Jul 7 00:11:15.242038 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:15.246466 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:48520.service: Deactivated successfully. Jul 7 00:11:15.248430 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:11:15.249288 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:11:15.250523 systemd-logind[1541]: Removed session 15. Jul 7 00:11:20.259634 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:43372.service - OpenSSH per-connection server daemon (10.0.0.1:43372). Jul 7 00:11:20.298132 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 43372 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:20.299802 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:20.304090 systemd-logind[1541]: New session 16 of user core. Jul 7 00:11:20.315816 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:11:20.505559 sshd[4156]: Connection closed by 10.0.0.1 port 43372 Jul 7 00:11:20.505943 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:20.510983 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:43372.service: Deactivated successfully. Jul 7 00:11:20.513353 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:11:20.514174 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:11:20.515520 systemd-logind[1541]: Removed session 16. Jul 7 00:11:25.531951 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:43378.service - OpenSSH per-connection server daemon (10.0.0.1:43378). 
Jul 7 00:11:25.592687 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 43378 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:25.594310 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:25.598680 systemd-logind[1541]: New session 17 of user core. Jul 7 00:11:25.609886 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:11:25.730847 sshd[4171]: Connection closed by 10.0.0.1 port 43378 Jul 7 00:11:25.731223 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:25.751891 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:43378.service: Deactivated successfully. Jul 7 00:11:25.754329 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:11:25.755801 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:11:25.759501 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:43380.service - OpenSSH per-connection server daemon (10.0.0.1:43380). Jul 7 00:11:25.760393 systemd-logind[1541]: Removed session 17. Jul 7 00:11:25.806967 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 43380 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:25.808492 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:25.813472 systemd-logind[1541]: New session 18 of user core. Jul 7 00:11:25.826858 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:11:26.089057 sshd[4187]: Connection closed by 10.0.0.1 port 43380 Jul 7 00:11:26.090663 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:26.100276 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:43380.service: Deactivated successfully. Jul 7 00:11:26.102056 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:11:26.103086 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit. 
Jul 7 00:11:26.106265 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:46742.service - OpenSSH per-connection server daemon (10.0.0.1:46742). Jul 7 00:11:26.107242 systemd-logind[1541]: Removed session 18. Jul 7 00:11:26.157492 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 46742 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:26.159369 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:26.164622 systemd-logind[1541]: New session 19 of user core. Jul 7 00:11:26.179890 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:11:26.967967 sshd[4201]: Connection closed by 10.0.0.1 port 46742 Jul 7 00:11:26.968385 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:26.979571 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:46742.service: Deactivated successfully. Jul 7 00:11:26.982120 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:11:26.983291 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:11:26.988136 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:46748.service - OpenSSH per-connection server daemon (10.0.0.1:46748). Jul 7 00:11:26.988809 systemd-logind[1541]: Removed session 19. Jul 7 00:11:27.034829 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 46748 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:27.036230 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:27.040713 systemd-logind[1541]: New session 20 of user core. Jul 7 00:11:27.050910 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:11:27.264164 sshd[4223]: Connection closed by 10.0.0.1 port 46748 Jul 7 00:11:27.266070 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:27.276532 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:46748.service: Deactivated successfully. 
Jul 7 00:11:27.278653 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:11:27.279511 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:11:27.283001 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:46750.service - OpenSSH per-connection server daemon (10.0.0.1:46750). Jul 7 00:11:27.283653 systemd-logind[1541]: Removed session 20. Jul 7 00:11:27.332992 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 46750 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:27.334530 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:27.338777 systemd-logind[1541]: New session 21 of user core. Jul 7 00:11:27.345821 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:11:27.459506 sshd[4237]: Connection closed by 10.0.0.1 port 46750 Jul 7 00:11:27.459878 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:27.464469 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:46750.service: Deactivated successfully. Jul 7 00:11:27.466814 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:11:27.467539 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:11:27.468888 systemd-logind[1541]: Removed session 21. Jul 7 00:11:32.484201 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:46756.service - OpenSSH per-connection server daemon (10.0.0.1:46756). Jul 7 00:11:32.529120 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 46756 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:32.530384 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:32.535044 systemd-logind[1541]: New session 22 of user core. Jul 7 00:11:32.545153 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 7 00:11:32.658103 sshd[4254]: Connection closed by 10.0.0.1 port 46756 Jul 7 00:11:32.658393 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:32.662797 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:46756.service: Deactivated successfully. Jul 7 00:11:32.664563 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:11:32.665311 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:11:32.666565 systemd-logind[1541]: Removed session 22. Jul 7 00:11:37.671416 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Jul 7 00:11:37.717758 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:37.719207 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:37.723483 systemd-logind[1541]: New session 23 of user core. Jul 7 00:11:37.737835 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:11:37.852561 sshd[4274]: Connection closed by 10.0.0.1 port 34128 Jul 7 00:11:37.853028 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:37.857293 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:34128.service: Deactivated successfully. Jul 7 00:11:37.859877 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:11:37.860775 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:11:37.862064 systemd-logind[1541]: Removed session 23. Jul 7 00:11:41.323798 kubelet[2707]: E0707 00:11:41.323720 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:42.868736 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136). 
Jul 7 00:11:42.921002 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:42.922393 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:42.926990 systemd-logind[1541]: New session 24 of user core. Jul 7 00:11:42.936893 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:11:43.053390 sshd[4289]: Connection closed by 10.0.0.1 port 34136 Jul 7 00:11:43.053749 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:43.058456 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:34136.service: Deactivated successfully. Jul 7 00:11:43.060491 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:11:43.061220 systemd-logind[1541]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:11:43.062391 systemd-logind[1541]: Removed session 24. Jul 7 00:11:48.069391 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:40584.service - OpenSSH per-connection server daemon (10.0.0.1:40584). Jul 7 00:11:48.124552 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 40584 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:48.126535 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:48.131304 systemd-logind[1541]: New session 25 of user core. Jul 7 00:11:48.139814 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:11:48.255164 sshd[4305]: Connection closed by 10.0.0.1 port 40584 Jul 7 00:11:48.255650 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Jul 7 00:11:48.266367 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:40584.service: Deactivated successfully. Jul 7 00:11:48.268048 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:11:48.268922 systemd-logind[1541]: Session 25 logged out. Waiting for processes to exit. 
Jul 7 00:11:48.272072 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:40598.service - OpenSSH per-connection server daemon (10.0.0.1:40598). Jul 7 00:11:48.272836 systemd-logind[1541]: Removed session 25. Jul 7 00:11:48.321188 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 40598 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:11:48.322562 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:48.326966 systemd-logind[1541]: New session 26 of user core. Jul 7 00:11:48.334809 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:11:49.678475 containerd[1548]: time="2025-07-07T00:11:49.678010806Z" level=info msg="StopContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" with timeout 30 (s)" Jul 7 00:11:49.688644 containerd[1548]: time="2025-07-07T00:11:49.688598294Z" level=info msg="Stop container \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" with signal terminated" Jul 7 00:11:49.701908 systemd[1]: cri-containerd-56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74.scope: Deactivated successfully. 
Jul 7 00:11:49.703748 containerd[1548]: time="2025-07-07T00:11:49.703717628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" id:\"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" pid:3289 exited_at:{seconds:1751847109 nanos:703209738}" Jul 7 00:11:49.703850 containerd[1548]: time="2025-07-07T00:11:49.703782050Z" level=info msg="received exit event container_id:\"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" id:\"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" pid:3289 exited_at:{seconds:1751847109 nanos:703209738}" Jul 7 00:11:49.720697 containerd[1548]: time="2025-07-07T00:11:49.719929857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" id:\"2974d6493b7631a5ac4d52bee11aefd4ea24efa46f5cc203525e8e2ed67eccb7\" pid:4347 exited_at:{seconds:1751847109 nanos:719572000}" Jul 7 00:11:49.720893 containerd[1548]: time="2025-07-07T00:11:49.720852130Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:11:49.723717 containerd[1548]: time="2025-07-07T00:11:49.723656971Z" level=info msg="StopContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" with timeout 2 (s)" Jul 7 00:11:49.724066 containerd[1548]: time="2025-07-07T00:11:49.724042539Z" level=info msg="Stop container \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" with signal terminated" Jul 7 00:11:49.730155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74-rootfs.mount: Deactivated successfully. 
Jul 7 00:11:49.732768 systemd-networkd[1489]: lxc_health: Link DOWN Jul 7 00:11:49.732774 systemd-networkd[1489]: lxc_health: Lost carrier Jul 7 00:11:49.743595 containerd[1548]: time="2025-07-07T00:11:49.743547131Z" level=info msg="StopContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" returns successfully" Jul 7 00:11:49.744392 containerd[1548]: time="2025-07-07T00:11:49.744368623Z" level=info msg="StopPodSandbox for \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\"" Jul 7 00:11:49.748615 containerd[1548]: time="2025-07-07T00:11:49.748583638Z" level=info msg="Container to stop \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:49.753215 systemd[1]: cri-containerd-85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07.scope: Deactivated successfully. Jul 7 00:11:49.753616 systemd[1]: cri-containerd-85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07.scope: Consumed 6.602s CPU time, 127.3M memory peak, 216K read from disk, 13.3M written to disk. Jul 7 00:11:49.754962 containerd[1548]: time="2025-07-07T00:11:49.754900267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" pid:3367 exited_at:{seconds:1751847109 nanos:754513014}" Jul 7 00:11:49.755111 containerd[1548]: time="2025-07-07T00:11:49.755038227Z" level=info msg="received exit event container_id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" id:\"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" pid:3367 exited_at:{seconds:1751847109 nanos:754513014}" Jul 7 00:11:49.756805 systemd[1]: cri-containerd-20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e.scope: Deactivated successfully. 
Jul 7 00:11:49.764538 containerd[1548]: time="2025-07-07T00:11:49.764478207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" id:\"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" pid:3004 exit_status:137 exited_at:{seconds:1751847109 nanos:763325519}" Jul 7 00:11:49.782971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07-rootfs.mount: Deactivated successfully. Jul 7 00:11:49.798546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e-rootfs.mount: Deactivated successfully. Jul 7 00:11:49.988416 containerd[1548]: time="2025-07-07T00:11:49.988367099Z" level=info msg="shim disconnected" id=20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e namespace=k8s.io Jul 7 00:11:49.989334 containerd[1548]: time="2025-07-07T00:11:49.988398369Z" level=warning msg="cleaning up after shim disconnected" id=20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e namespace=k8s.io Jul 7 00:11:49.990012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e-shm.mount: Deactivated successfully. 
Jul 7 00:11:50.026643 containerd[1548]: time="2025-07-07T00:11:49.988578849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:11:50.026835 containerd[1548]: time="2025-07-07T00:11:49.994468450Z" level=info msg="StopContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" returns successfully" Jul 7 00:11:50.026835 containerd[1548]: time="2025-07-07T00:11:50.001523202Z" level=info msg="received exit event sandbox_id:\"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" exit_status:137 exited_at:{seconds:1751847109 nanos:763325519}" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.023591789Z" level=info msg="TearDown network for sandbox \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" successfully" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027019013Z" level=info msg="StopPodSandbox for \"20e0111adc3aaa067173bc019ed13795734d217eecc3a0866e62b2f837c72d4e\" returns successfully" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027552291Z" level=info msg="StopPodSandbox for \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\"" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027603398Z" level=info msg="Container to stop \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027614359Z" level=info msg="Container to stop \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027621903Z" level=info msg="Container to stop \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:50.027873 containerd[1548]: 
time="2025-07-07T00:11:50.027629577Z" level=info msg="Container to stop \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:50.027873 containerd[1548]: time="2025-07-07T00:11:50.027637752Z" level=info msg="Container to stop \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:11:50.035381 systemd[1]: cri-containerd-826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e.scope: Deactivated successfully. Jul 7 00:11:50.035735 systemd[1]: cri-containerd-826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e.scope: Consumed 23ms CPU time, 6.8M memory peak, 4.4M read from disk. Jul 7 00:11:50.038107 containerd[1548]: time="2025-07-07T00:11:50.038053005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" id:\"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" pid:2880 exit_status:137 exited_at:{seconds:1751847110 nanos:37129759}" Jul 7 00:11:50.071292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e-rootfs.mount: Deactivated successfully. 
Jul 7 00:11:50.077430 containerd[1548]: time="2025-07-07T00:11:50.077329094Z" level=info msg="shim disconnected" id=826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e namespace=k8s.io Jul 7 00:11:50.077430 containerd[1548]: time="2025-07-07T00:11:50.077367867Z" level=warning msg="cleaning up after shim disconnected" id=826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e namespace=k8s.io Jul 7 00:11:50.077430 containerd[1548]: time="2025-07-07T00:11:50.077378657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:11:50.093239 containerd[1548]: time="2025-07-07T00:11:50.093086432Z" level=info msg="received exit event sandbox_id:\"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" exit_status:137 exited_at:{seconds:1751847110 nanos:37129759}" Jul 7 00:11:50.093774 containerd[1548]: time="2025-07-07T00:11:50.093616114Z" level=info msg="TearDown network for sandbox \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" successfully" Jul 7 00:11:50.093774 containerd[1548]: time="2025-07-07T00:11:50.093654035Z" level=info msg="StopPodSandbox for \"826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e\" returns successfully" Jul 7 00:11:50.130795 kubelet[2707]: I0707 00:11:50.130736 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-lib-modules\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.130795 kubelet[2707]: I0707 00:11:50.130795 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f61aa24-e0bc-47d9-be07-97025b447499-cilium-config-path\") pod \"4f61aa24-e0bc-47d9-be07-97025b447499\" (UID: \"4f61aa24-e0bc-47d9-be07-97025b447499\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130819 2707 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85r7l\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-kube-api-access-85r7l\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130836 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cni-path\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130851 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-net\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130911 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-hostproc\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130926 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-etc-cni-netd\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131363 kubelet[2707]: I0707 00:11:50.130908 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: 
"ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131558 kubelet[2707]: I0707 00:11:50.130946 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-bpf-maps\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131558 kubelet[2707]: I0707 00:11:50.131050 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131558 kubelet[2707]: I0707 00:11:50.131083 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131558 kubelet[2707]: I0707 00:11:50.131097 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131558 kubelet[2707]: I0707 00:11:50.131121 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131779 kubelet[2707]: I0707 00:11:50.131146 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131779 kubelet[2707]: I0707 00:11:50.130963 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-cgroup\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131779 kubelet[2707]: I0707 00:11:50.131182 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-run\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131779 kubelet[2707]: I0707 00:11:50.131198 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xwg7\" (UniqueName: \"kubernetes.io/projected/4f61aa24-e0bc-47d9-be07-97025b447499-kube-api-access-7xwg7\") pod \"4f61aa24-e0bc-47d9-be07-97025b447499\" (UID: \"4f61aa24-e0bc-47d9-be07-97025b447499\") " Jul 7 00:11:50.131779 kubelet[2707]: 
I0707 00:11:50.131270 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-xtables-lock\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131779 kubelet[2707]: I0707 00:11:50.131288 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce567480-1348-435b-8dbd-3c311e0e0c9d-clustermesh-secrets\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131229 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131244 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131740 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-hubble-tls\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131771 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-config-path\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131789 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-kernel\") pod \"ce567480-1348-435b-8dbd-3c311e0e0c9d\" (UID: \"ce567480-1348-435b-8dbd-3c311e0e0c9d\") " Jul 7 00:11:50.131988 kubelet[2707]: I0707 00:11:50.131826 2707 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131836 2707 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131845 2707 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131853 2707 reconciler_common.go:299] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131862 2707 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131869 2707 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131877 2707 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131884 2707 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.132183 kubelet[2707]: I0707 00:11:50.131906 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 00:11:50.132433 kubelet[2707]: I0707 00:11:50.131927 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 00:11:50.134397 kubelet[2707]: I0707 00:11:50.134043 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f61aa24-e0bc-47d9-be07-97025b447499-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f61aa24-e0bc-47d9-be07-97025b447499" (UID: "4f61aa24-e0bc-47d9-be07-97025b447499"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 00:11:50.135154 kubelet[2707]: I0707 00:11:50.135120 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f61aa24-e0bc-47d9-be07-97025b447499-kube-api-access-7xwg7" (OuterVolumeSpecName: "kube-api-access-7xwg7") pod "4f61aa24-e0bc-47d9-be07-97025b447499" (UID: "4f61aa24-e0bc-47d9-be07-97025b447499"). InnerVolumeSpecName "kube-api-access-7xwg7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 00:11:50.135767 kubelet[2707]: I0707 00:11:50.135732 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce567480-1348-435b-8dbd-3c311e0e0c9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 7 00:11:50.136206 kubelet[2707]: I0707 00:11:50.136167 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-kube-api-access-85r7l" (OuterVolumeSpecName: "kube-api-access-85r7l") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "kube-api-access-85r7l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 00:11:50.137219 kubelet[2707]: I0707 00:11:50.137199 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 00:11:50.137331 kubelet[2707]: I0707 00:11:50.137306 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce567480-1348-435b-8dbd-3c311e0e0c9d" (UID: "ce567480-1348-435b-8dbd-3c311e0e0c9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 00:11:50.232772 kubelet[2707]: I0707 00:11:50.232707 2707 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xwg7\" (UniqueName: \"kubernetes.io/projected/4f61aa24-e0bc-47d9-be07-97025b447499-kube-api-access-7xwg7\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232772 kubelet[2707]: I0707 00:11:50.232752 2707 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232809 2707 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232820 2707 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce567480-1348-435b-8dbd-3c311e0e0c9d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232831 2707 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce567480-1348-435b-8dbd-3c311e0e0c9d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232840 2707 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce567480-1348-435b-8dbd-3c311e0e0c9d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232850 2707 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f61aa24-e0bc-47d9-be07-97025b447499-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.232994 kubelet[2707]: I0707 00:11:50.232864 2707 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-85r7l\" (UniqueName: \"kubernetes.io/projected/ce567480-1348-435b-8dbd-3c311e0e0c9d-kube-api-access-85r7l\") on node \"localhost\" DevicePath \"\""
Jul 7 00:11:50.324151 kubelet[2707]: E0707 00:11:50.323992 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:11:50.525343 kubelet[2707]: I0707 00:11:50.525305 2707 scope.go:117] "RemoveContainer" containerID="56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74"
Jul 7 00:11:50.527413 containerd[1548]: time="2025-07-07T00:11:50.527365912Z" level=info msg="RemoveContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\""
Jul 7 00:11:50.536732 systemd[1]: Removed slice kubepods-besteffort-pod4f61aa24_e0bc_47d9_be07_97025b447499.slice - libcontainer container kubepods-besteffort-pod4f61aa24_e0bc_47d9_be07_97025b447499.slice.
Jul 7 00:11:50.537513 containerd[1548]: time="2025-07-07T00:11:50.537389494Z" level=info msg="RemoveContainer for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" returns successfully"
Jul 7 00:11:50.538632 systemd[1]: Removed slice kubepods-burstable-podce567480_1348_435b_8dbd_3c311e0e0c9d.slice - libcontainer container kubepods-burstable-podce567480_1348_435b_8dbd_3c311e0e0c9d.slice.
Jul 7 00:11:50.538762 systemd[1]: kubepods-burstable-podce567480_1348_435b_8dbd_3c311e0e0c9d.slice: Consumed 6.731s CPU time, 130.3M memory peak, 4.6M read from disk, 13.3M written to disk.
Jul 7 00:11:50.543814 kubelet[2707]: I0707 00:11:50.543777 2707 scope.go:117] "RemoveContainer" containerID="56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74"
Jul 7 00:11:50.544094 containerd[1548]: time="2025-07-07T00:11:50.544054051Z" level=error msg="ContainerStatus for \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\": not found"
Jul 7 00:11:50.547260 kubelet[2707]: E0707 00:11:50.547212 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\": not found" containerID="56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74"
Jul 7 00:11:50.547531 kubelet[2707]: I0707 00:11:50.547252 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74"} err="failed to get container status \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\": rpc error: code = NotFound desc = an error occurred when try to find container \"56f45b4a578da285a1fca85deeb5feac1a8d20e606c2ef44f6b33921dc19fb74\": not found"
Jul 7 00:11:50.547531 kubelet[2707]: I0707 00:11:50.547340 2707 scope.go:117] "RemoveContainer" containerID="85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07"
Jul 7 00:11:50.549895 containerd[1548]: time="2025-07-07T00:11:50.549826901Z" level=info msg="RemoveContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\""
Jul 7 00:11:50.555720 containerd[1548]: time="2025-07-07T00:11:50.555665066Z" level=info msg="RemoveContainer for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" returns successfully"
Jul 7 00:11:50.555906 kubelet[2707]: I0707 00:11:50.555885 2707 scope.go:117] "RemoveContainer" containerID="463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00"
Jul 7 00:11:50.558157 containerd[1548]: time="2025-07-07T00:11:50.558122013Z" level=info msg="RemoveContainer for \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\""
Jul 7 00:11:50.563837 containerd[1548]: time="2025-07-07T00:11:50.563804523Z" level=info msg="RemoveContainer for \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" returns successfully"
Jul 7 00:11:50.564054 kubelet[2707]: I0707 00:11:50.563970 2707 scope.go:117] "RemoveContainer" containerID="20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f"
Jul 7 00:11:50.565906 containerd[1548]: time="2025-07-07T00:11:50.565879067Z" level=info msg="RemoveContainer for \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\""
Jul 7 00:11:50.570527 containerd[1548]: time="2025-07-07T00:11:50.570501662Z" level=info msg="RemoveContainer for \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" returns successfully"
Jul 7 00:11:50.570660 kubelet[2707]: I0707 00:11:50.570638 2707 scope.go:117] "RemoveContainer" containerID="5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa"
Jul 7 00:11:50.571933 containerd[1548]: time="2025-07-07T00:11:50.571900858Z" level=info msg="RemoveContainer for \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\""
Jul 7 00:11:50.575868 containerd[1548]: time="2025-07-07T00:11:50.575794073Z" level=info msg="RemoveContainer for \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" returns successfully"
Jul 7 00:11:50.576026 kubelet[2707]: I0707 00:11:50.575997 2707 scope.go:117] "RemoveContainer" containerID="88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2"
Jul 7 00:11:50.600610 containerd[1548]: time="2025-07-07T00:11:50.600574720Z" level=info msg="RemoveContainer for \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\""
Jul 7 00:11:50.604360 containerd[1548]: time="2025-07-07T00:11:50.604319655Z" level=info msg="RemoveContainer for \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" returns successfully"
Jul 7 00:11:50.604499 kubelet[2707]: I0707 00:11:50.604468 2707 scope.go:117] "RemoveContainer" containerID="85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07"
Jul 7 00:11:50.604704 containerd[1548]: time="2025-07-07T00:11:50.604651793Z" level=error msg="ContainerStatus for \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\": not found"
Jul 7 00:11:50.604848 kubelet[2707]: E0707 00:11:50.604823 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\": not found" containerID="85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07"
Jul 7 00:11:50.604891 kubelet[2707]: I0707 00:11:50.604857 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07"} err="failed to get container status \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\": rpc error: code = NotFound desc = an error occurred when try to find container \"85d0a8a5cb418be6fc8a67d0d12c1766bb53c683d5a0dc6666b6bd5385b78b07\": not found"
Jul 7 00:11:50.604923 kubelet[2707]: I0707 00:11:50.604888 2707 scope.go:117] "RemoveContainer" containerID="463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00"
Jul 7 00:11:50.605084 containerd[1548]: time="2025-07-07T00:11:50.605057590Z" level=error msg="ContainerStatus for \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\": not found"
Jul 7 00:11:50.605183 kubelet[2707]: E0707 00:11:50.605161 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\": not found" containerID="463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00"
Jul 7 00:11:50.605220 kubelet[2707]: I0707 00:11:50.605190 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00"} err="failed to get container status \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\": rpc error: code = NotFound desc = an error occurred when try to find container \"463a3042e7333907c6447d841a6dcd83583c08fa3b113e030fbc575924b20e00\": not found"
Jul 7 00:11:50.605220 kubelet[2707]: I0707 00:11:50.605214 2707 scope.go:117] "RemoveContainer" containerID="20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f"
Jul 7 00:11:50.605398 containerd[1548]: time="2025-07-07T00:11:50.605350355Z" level=error msg="ContainerStatus for \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\": not found"
Jul 7 00:11:50.605462 kubelet[2707]: E0707 00:11:50.605437 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\": not found" containerID="20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f"
Jul 7 00:11:50.605539 kubelet[2707]: I0707 00:11:50.605459 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f"} err="failed to get container status \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"20a0e2838eb22a906069e965c37bf58691a7832e1cc0c1ae98c9268e5f306e0f\": not found"
Jul 7 00:11:50.605539 kubelet[2707]: I0707 00:11:50.605474 2707 scope.go:117] "RemoveContainer" containerID="5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa"
Jul 7 00:11:50.605692 containerd[1548]: time="2025-07-07T00:11:50.605642326Z" level=error msg="ContainerStatus for \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\": not found"
Jul 7 00:11:50.605836 kubelet[2707]: E0707 00:11:50.605809 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\": not found" containerID="5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa"
Jul 7 00:11:50.605873 kubelet[2707]: I0707 00:11:50.605844 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa"} err="failed to get container status \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c7a0b721299429b01bd15e1703b51ee346c3784e0997b25f82be36d240836aa\": not found"
Jul 7 00:11:50.605873 kubelet[2707]: I0707 00:11:50.605869 2707 scope.go:117] "RemoveContainer" containerID="88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2"
Jul 7 00:11:50.606034 containerd[1548]: time="2025-07-07T00:11:50.606005524Z" level=error msg="ContainerStatus for \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\": not found"
Jul 7 00:11:50.606137 kubelet[2707]: E0707 00:11:50.606113 2707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\": not found" containerID="88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2"
Jul 7 00:11:50.606186 kubelet[2707]: I0707 00:11:50.606136 2707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2"} err="failed to get container status \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"88c18a650b66981b8ce4a120c85877262b93135188458db4668e67c89356beb2\": not found"
Jul 7 00:11:50.729747 systemd[1]: var-lib-kubelet-pods-4f61aa24\x2de0bc\x2d47d9\x2dbe07\x2d97025b447499-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xwg7.mount: Deactivated successfully.
Jul 7 00:11:50.729868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-826a372b3d5457ce9abdb4944e3cadc37dea6d2f87a306f2552fcaa83f89e82e-shm.mount: Deactivated successfully.
Jul 7 00:11:50.729970 systemd[1]: var-lib-kubelet-pods-ce567480\x2d1348\x2d435b\x2d8dbd\x2d3c311e0e0c9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85r7l.mount: Deactivated successfully.
Jul 7 00:11:50.730062 systemd[1]: var-lib-kubelet-pods-ce567480\x2d1348\x2d435b\x2d8dbd\x2d3c311e0e0c9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 7 00:11:50.730152 systemd[1]: var-lib-kubelet-pods-ce567480\x2d1348\x2d435b\x2d8dbd\x2d3c311e0e0c9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 7 00:11:51.328693 kubelet[2707]: I0707 00:11:51.326700 2707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f61aa24-e0bc-47d9-be07-97025b447499" path="/var/lib/kubelet/pods/4f61aa24-e0bc-47d9-be07-97025b447499/volumes"
Jul 7 00:11:51.328693 kubelet[2707]: I0707 00:11:51.327336 2707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce567480-1348-435b-8dbd-3c311e0e0c9d" path="/var/lib/kubelet/pods/ce567480-1348-435b-8dbd-3c311e0e0c9d/volumes"
Jul 7 00:11:51.645407 sshd[4320]: Connection closed by 10.0.0.1 port 40598
Jul 7 00:11:51.645882 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:51.657401 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:40598.service: Deactivated successfully.
Jul 7 00:11:51.659466 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 00:11:51.660316 systemd-logind[1541]: Session 26 logged out. Waiting for processes to exit.
Jul 7 00:11:51.663502 systemd[1]: Started sshd@26-10.0.0.74:22-10.0.0.1:40612.service - OpenSSH per-connection server daemon (10.0.0.1:40612).
Jul 7 00:11:51.664245 systemd-logind[1541]: Removed session 26.
Jul 7 00:11:51.713498 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 40612 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:11:51.714881 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:51.719442 systemd-logind[1541]: New session 27 of user core.
Jul 7 00:11:51.729795 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 00:11:52.205177 sshd[4478]: Connection closed by 10.0.0.1 port 40612
Jul 7 00:11:52.204470 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:52.213932 systemd[1]: sshd@26-10.0.0.74:22-10.0.0.1:40612.service: Deactivated successfully.
Jul 7 00:11:52.216869 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 00:11:52.218402 systemd-logind[1541]: Session 27 logged out. Waiting for processes to exit.
Jul 7 00:11:52.224311 systemd[1]: Started sshd@27-10.0.0.74:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628).
Jul 7 00:11:52.225659 systemd-logind[1541]: Removed session 27.
Jul 7 00:11:52.233475 kubelet[2707]: I0707 00:11:52.232824 2707 memory_manager.go:355] "RemoveStaleState removing state" podUID="ce567480-1348-435b-8dbd-3c311e0e0c9d" containerName="cilium-agent"
Jul 7 00:11:52.233475 kubelet[2707]: I0707 00:11:52.232861 2707 memory_manager.go:355] "RemoveStaleState removing state" podUID="4f61aa24-e0bc-47d9-be07-97025b447499" containerName="cilium-operator"
Jul 7 00:11:52.248812 systemd[1]: Created slice kubepods-burstable-pod93a14462_0010_48be_8d72_27af369fcd3b.slice - libcontainer container kubepods-burstable-pod93a14462_0010_48be_8d72_27af369fcd3b.slice.
Jul 7 00:11:52.277619 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:11:52.279200 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:52.284339 systemd-logind[1541]: New session 28 of user core.
Jul 7 00:11:52.291813 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 00:11:52.343780 sshd[4493]: Connection closed by 10.0.0.1 port 40628
Jul 7 00:11:52.344110 sshd-session[4490]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:52.346842 kubelet[2707]: I0707 00:11:52.346791 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-hostproc\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.346842 kubelet[2707]: I0707 00:11:52.346835 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-cilium-cgroup\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347162 kubelet[2707]: I0707 00:11:52.346858 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-cilium-run\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347162 kubelet[2707]: I0707 00:11:52.346874 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93a14462-0010-48be-8d72-27af369fcd3b-clustermesh-secrets\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347162 kubelet[2707]: I0707 00:11:52.346889 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93a14462-0010-48be-8d72-27af369fcd3b-cilium-config-path\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347162 kubelet[2707]: I0707 00:11:52.346905 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-host-proc-sys-net\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347162 kubelet[2707]: I0707 00:11:52.346919 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93a14462-0010-48be-8d72-27af369fcd3b-cilium-ipsec-secrets\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.346933 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-etc-cni-netd\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.346949 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-xtables-lock\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.346989 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-lib-modules\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.347005 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-host-proc-sys-kernel\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.347038 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93a14462-0010-48be-8d72-27af369fcd3b-hubble-tls\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347287 kubelet[2707]: I0707 00:11:52.347138 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hm6h\" (UniqueName: \"kubernetes.io/projected/93a14462-0010-48be-8d72-27af369fcd3b-kube-api-access-2hm6h\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347417 kubelet[2707]: I0707 00:11:52.347159 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-bpf-maps\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.347417 kubelet[2707]: I0707 00:11:52.347209 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93a14462-0010-48be-8d72-27af369fcd3b-cni-path\") pod \"cilium-jg7j5\" (UID: \"93a14462-0010-48be-8d72-27af369fcd3b\") " pod="kube-system/cilium-jg7j5"
Jul 7 00:11:52.355581 systemd[1]: sshd@27-10.0.0.74:22-10.0.0.1:40628.service: Deactivated successfully.
Jul 7 00:11:52.357494 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 00:11:52.358307 systemd-logind[1541]: Session 28 logged out. Waiting for processes to exit.
Jul 7 00:11:52.361215 systemd[1]: Started sshd@28-10.0.0.74:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642).
Jul 7 00:11:52.361893 systemd-logind[1541]: Removed session 28.
Jul 7 00:11:52.407214 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:11:52.408627 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:52.413503 systemd-logind[1541]: New session 29 of user core.
Jul 7 00:11:52.422896 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 00:11:52.553175 kubelet[2707]: E0707 00:11:52.553100 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:11:52.553799 containerd[1548]: time="2025-07-07T00:11:52.553720507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg7j5,Uid:93a14462-0010-48be-8d72-27af369fcd3b,Namespace:kube-system,Attempt:0,}"
Jul 7 00:11:52.576857 containerd[1548]: time="2025-07-07T00:11:52.576801659Z" level=info msg="connecting to shim 31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:11:52.607805 systemd[1]: Started cri-containerd-31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa.scope - libcontainer container 31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa.
Jul 7 00:11:52.636645 containerd[1548]: time="2025-07-07T00:11:52.636572523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg7j5,Uid:93a14462-0010-48be-8d72-27af369fcd3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\""
Jul 7 00:11:52.637751 kubelet[2707]: E0707 00:11:52.637475 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:11:52.640405 containerd[1548]: time="2025-07-07T00:11:52.640367406Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 00:11:52.648577 containerd[1548]: time="2025-07-07T00:11:52.648530857Z" level=info msg="Container 5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:11:52.656562 containerd[1548]: time="2025-07-07T00:11:52.656517482Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\""
Jul 7 00:11:52.657131 containerd[1548]: time="2025-07-07T00:11:52.657063197Z" level=info msg="StartContainer for \"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\""
Jul 7 00:11:52.657960 containerd[1548]: time="2025-07-07T00:11:52.657909029Z" level=info msg="connecting to shim 5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" protocol=ttrpc version=3
Jul 7 00:11:52.685875 systemd[1]: Started cri-containerd-5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89.scope - libcontainer container 5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89.
Jul 7 00:11:52.715097 containerd[1548]: time="2025-07-07T00:11:52.715052485Z" level=info msg="StartContainer for \"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\" returns successfully"
Jul 7 00:11:52.725199 systemd[1]: cri-containerd-5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89.scope: Deactivated successfully.
Jul 7 00:11:52.726596 containerd[1548]: time="2025-07-07T00:11:52.726562868Z" level=info msg="received exit event container_id:\"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\" id:\"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\" pid:4573 exited_at:{seconds:1751847112 nanos:726273199}"
Jul 7 00:11:52.726789 containerd[1548]: time="2025-07-07T00:11:52.726616750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\" id:\"5c6845a6717f6ec555cfcd28b44617d5a747fcb38099372fe0d2d41481451c89\" pid:4573 exited_at:{seconds:1751847112 nanos:726273199}"
Jul 7 00:11:53.539151 kubelet[2707]: E0707 00:11:53.539114 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:11:53.541281 containerd[1548]: time="2025-07-07T00:11:53.541214308Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 00:11:53.549425 containerd[1548]: time="2025-07-07T00:11:53.549389715Z" level=info msg="Container 6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:11:53.557788 containerd[1548]: time="2025-07-07T00:11:53.557736317Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\""
Jul 7 00:11:53.558370 containerd[1548]: time="2025-07-07T00:11:53.558335514Z" level=info msg="StartContainer for \"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\""
Jul 7 00:11:53.559360 containerd[1548]: time="2025-07-07T00:11:53.559324760Z" level=info msg="connecting to shim 6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" protocol=ttrpc version=3
Jul 7 00:11:53.583839 systemd[1]: Started cri-containerd-6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8.scope - libcontainer container 6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8.
Jul 7 00:11:53.615974 containerd[1548]: time="2025-07-07T00:11:53.615928735Z" level=info msg="StartContainer for \"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\" returns successfully"
Jul 7 00:11:53.622358 systemd[1]: cri-containerd-6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8.scope: Deactivated successfully.
Jul 7 00:11:53.622848 containerd[1548]: time="2025-07-07T00:11:53.622809016Z" level=info msg="received exit event container_id:\"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\" id:\"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\" pid:4618 exited_at:{seconds:1751847113 nanos:622510089}" Jul 7 00:11:53.622936 containerd[1548]: time="2025-07-07T00:11:53.622836639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\" id:\"6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8\" pid:4618 exited_at:{seconds:1751847113 nanos:622510089}" Jul 7 00:11:53.644270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a5798270a1a251e77d38b6bddaa72cef1707509ffdbde5880a84c36244122c8-rootfs.mount: Deactivated successfully. Jul 7 00:11:54.398749 kubelet[2707]: E0707 00:11:54.398646 2707 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:11:54.542279 kubelet[2707]: E0707 00:11:54.542241 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:54.544168 containerd[1548]: time="2025-07-07T00:11:54.544128902Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:11:54.699861 containerd[1548]: time="2025-07-07T00:11:54.699735496Z" level=info msg="Container 1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:11:54.701916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464242755.mount: Deactivated successfully. 
Jul 7 00:11:54.713555 containerd[1548]: time="2025-07-07T00:11:54.713496810Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\"" Jul 7 00:11:54.714192 containerd[1548]: time="2025-07-07T00:11:54.714159017Z" level=info msg="StartContainer for \"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\"" Jul 7 00:11:54.715824 containerd[1548]: time="2025-07-07T00:11:54.715786517Z" level=info msg="connecting to shim 1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" protocol=ttrpc version=3 Jul 7 00:11:54.738922 systemd[1]: Started cri-containerd-1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747.scope - libcontainer container 1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747. Jul 7 00:11:54.785883 systemd[1]: cri-containerd-1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747.scope: Deactivated successfully. 
Jul 7 00:11:54.786867 containerd[1548]: time="2025-07-07T00:11:54.786821749Z" level=info msg="StartContainer for \"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\" returns successfully" Jul 7 00:11:54.787026 containerd[1548]: time="2025-07-07T00:11:54.786962296Z" level=info msg="received exit event container_id:\"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\" id:\"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\" pid:4664 exited_at:{seconds:1751847114 nanos:786757837}" Jul 7 00:11:54.787575 containerd[1548]: time="2025-07-07T00:11:54.787537668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\" id:\"1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747\" pid:4664 exited_at:{seconds:1751847114 nanos:786757837}" Jul 7 00:11:54.812245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e65dc1e7f31940a06b1cb7cef05d0d78d3e477ea860f696333feeb0ea399747-rootfs.mount: Deactivated successfully. 
Jul 7 00:11:55.547273 kubelet[2707]: E0707 00:11:55.547242 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:55.549692 containerd[1548]: time="2025-07-07T00:11:55.549383745Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:11:55.580743 containerd[1548]: time="2025-07-07T00:11:55.580693251Z" level=info msg="Container 03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:11:55.589454 containerd[1548]: time="2025-07-07T00:11:55.589403520Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\"" Jul 7 00:11:55.590021 containerd[1548]: time="2025-07-07T00:11:55.589984844Z" level=info msg="StartContainer for \"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\"" Jul 7 00:11:55.590959 containerd[1548]: time="2025-07-07T00:11:55.590926503Z" level=info msg="connecting to shim 03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" protocol=ttrpc version=3 Jul 7 00:11:55.611805 systemd[1]: Started cri-containerd-03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d.scope - libcontainer container 03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d. Jul 7 00:11:55.637575 systemd[1]: cri-containerd-03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d.scope: Deactivated successfully. 
Jul 7 00:11:55.638515 containerd[1548]: time="2025-07-07T00:11:55.638479010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\" id:\"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\" pid:4703 exited_at:{seconds:1751847115 nanos:637895381}" Jul 7 00:11:55.640006 containerd[1548]: time="2025-07-07T00:11:55.639968750Z" level=info msg="received exit event container_id:\"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\" id:\"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\" pid:4703 exited_at:{seconds:1751847115 nanos:637895381}" Jul 7 00:11:55.647709 containerd[1548]: time="2025-07-07T00:11:55.647665734Z" level=info msg="StartContainer for \"03964d49542d4b597a40f655730b32d2c90a3be2cd919352983dcc385fc5858d\" returns successfully" Jul 7 00:11:56.554127 kubelet[2707]: E0707 00:11:56.554085 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:56.555728 containerd[1548]: time="2025-07-07T00:11:56.555656490Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:11:56.611130 containerd[1548]: time="2025-07-07T00:11:56.611056059Z" level=info msg="Container 3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:11:56.618928 containerd[1548]: time="2025-07-07T00:11:56.618880948Z" level=info msg="CreateContainer within sandbox \"31aaa4522427e050e9d15af6182f502c456c0c475b313456b0e3b03489fe07fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\"" Jul 7 00:11:56.619479 containerd[1548]: time="2025-07-07T00:11:56.619447956Z" 
level=info msg="StartContainer for \"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\"" Jul 7 00:11:56.620318 containerd[1548]: time="2025-07-07T00:11:56.620291781Z" level=info msg="connecting to shim 3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa" address="unix:///run/containerd/s/a9bb5d8e5104698c9740e38e65c54c3b712d3352e5aeec5750e7daf04d95f037" protocol=ttrpc version=3 Jul 7 00:11:56.645801 systemd[1]: Started cri-containerd-3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa.scope - libcontainer container 3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa. Jul 7 00:11:56.685086 containerd[1548]: time="2025-07-07T00:11:56.685019736Z" level=info msg="StartContainer for \"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" returns successfully" Jul 7 00:11:56.767901 containerd[1548]: time="2025-07-07T00:11:56.767834213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"c338497619e7483986a9f82d540d24fe4fbf50dd572eb61bb3300921b74a0d3e\" pid:4771 exited_at:{seconds:1751847116 nanos:767312671}" Jul 7 00:11:57.247713 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 00:11:57.560702 kubelet[2707]: E0707 00:11:57.560553 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:57.575228 kubelet[2707]: I0707 00:11:57.575161 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jg7j5" podStartSLOduration=5.575128683 podStartE2EDuration="5.575128683s" podCreationTimestamp="2025-07-07 00:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:57.574993005 +0000 UTC m=+88.340884669" 
watchObservedRunningTime="2025-07-07 00:11:57.575128683 +0000 UTC m=+88.341020337" Jul 7 00:11:58.562712 kubelet[2707]: E0707 00:11:58.562658 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:11:58.888098 containerd[1548]: time="2025-07-07T00:11:58.887890407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"805d13530dbf00e85993961caf8e7139252c38ec326ad02515b45722e21a550b\" pid:4912 exit_status:1 exited_at:{seconds:1751847118 nanos:887507768}" Jul 7 00:12:00.392714 systemd-networkd[1489]: lxc_health: Link UP Jul 7 00:12:00.393017 systemd-networkd[1489]: lxc_health: Gained carrier Jul 7 00:12:00.556705 kubelet[2707]: E0707 00:12:00.555658 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:12:00.568377 kubelet[2707]: E0707 00:12:00.568220 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:12:01.050035 containerd[1548]: time="2025-07-07T00:12:01.049834197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"669d6cfcbe8220774c87de7abc660ac409c13ca1e7c9292d464f79801b495ac2\" pid:5300 exited_at:{seconds:1751847121 nanos:48627024}" Jul 7 00:12:01.324639 kubelet[2707]: E0707 00:12:01.324392 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:12:01.571128 kubelet[2707]: E0707 00:12:01.571081 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:12:02.410924 systemd-networkd[1489]: lxc_health: Gained IPv6LL Jul 7 00:12:03.169458 containerd[1548]: time="2025-07-07T00:12:03.168652087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"9023c11d195f87229710d17a25c2bea306a43c478605fe9bde7425a2d69c159d\" pid:5335 exited_at:{seconds:1751847123 nanos:168221524}" Jul 7 00:12:05.256330 containerd[1548]: time="2025-07-07T00:12:05.256280252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"777a6ff27b3879e8fe4e3f4f311f6b0c0461eaa0c25a239b7f8a324934361d10\" pid:5366 exited_at:{seconds:1751847125 nanos:255841673}" Jul 7 00:12:06.323825 kubelet[2707]: E0707 00:12:06.323765 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:12:07.363976 containerd[1548]: time="2025-07-07T00:12:07.363925312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d63ffe38eedc1fea5ed375e798537fbd0f49f55af10e63537b7f8f54329e9fa\" id:\"9f41852da08bbb454ec03847d414e2d7514f21a1fb8a17c67a64dca883211caf\" pid:5392 exited_at:{seconds:1751847127 nanos:363449752}" Jul 7 00:12:07.381025 sshd[4503]: Connection closed by 10.0.0.1 port 40642 Jul 7 00:12:07.381492 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Jul 7 00:12:07.386081 systemd[1]: sshd@28-10.0.0.74:22-10.0.0.1:40642.service: Deactivated successfully. Jul 7 00:12:07.388181 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 00:12:07.388936 systemd-logind[1541]: Session 29 logged out. Waiting for processes to exit. Jul 7 00:12:07.390279 systemd-logind[1541]: Removed session 29.