Nov 6 00:26:15.603636 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:11:41 -00 2025
Nov 6 00:26:15.603671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:26:15.603687 kernel: BIOS-provided physical RAM map:
Nov 6 00:26:15.603697 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 6 00:26:15.603706 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 6 00:26:15.603715 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 00:26:15.603727 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 6 00:26:15.603737 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 6 00:26:15.603751 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 6 00:26:15.603765 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 6 00:26:15.603775 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:26:15.603785 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 00:26:15.603795 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:26:15.603805 kernel: NX (Execute Disable) protection: active
Nov 6 00:26:15.603821 kernel: APIC: Static calls initialized
Nov 6 00:26:15.603832 kernel: SMBIOS 2.8 present.
Nov 6 00:26:15.603847 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 6 00:26:15.603858 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:26:15.603868 kernel: Hypervisor detected: KVM
Nov 6 00:26:15.603879 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:26:15.603890 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:26:15.603900 kernel: kvm-clock: using sched offset of 3945691460 cycles
Nov 6 00:26:15.603912 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:26:15.603927 kernel: tsc: Detected 2794.748 MHz processor
Nov 6 00:26:15.603939 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:26:15.603951 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:26:15.603962 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:26:15.603973 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 00:26:15.603985 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:26:15.603996 kernel: Using GB pages for direct mapping
Nov 6 00:26:15.604028 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:26:15.604044 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 6 00:26:15.604071 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604083 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604095 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604105 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 6 00:26:15.604117 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604128 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604143 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604155 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:26:15.604170 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 6 00:26:15.604182 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 6 00:26:15.604193 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 6 00:26:15.604208 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 6 00:26:15.604219 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 6 00:26:15.604231 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 6 00:26:15.604243 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 6 00:26:15.604254 kernel: No NUMA configuration found
Nov 6 00:26:15.604266 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 6 00:26:15.604280 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 6 00:26:15.604293 kernel: Zone ranges:
Nov 6 00:26:15.604305 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:26:15.604317 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 6 00:26:15.604329 kernel: Normal empty
Nov 6 00:26:15.604340 kernel: Device empty
Nov 6 00:26:15.604352 kernel: Movable zone start for each node
Nov 6 00:26:15.604363 kernel: Early memory node ranges
Nov 6 00:26:15.604378 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 00:26:15.604390 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 6 00:26:15.604402 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 6 00:26:15.604414 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:26:15.604426 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 00:26:15.604437 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 6 00:26:15.604454 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:26:15.604469 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:26:15.604481 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:26:15.604493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:26:15.604508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:26:15.604520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:26:15.604532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:26:15.604544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:26:15.604559 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:26:15.604571 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:26:15.604592 kernel: TSC deadline timer available
Nov 6 00:26:15.604604 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:26:15.604616 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:26:15.604627 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:26:15.604639 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:26:15.604650 kernel: CPU topo: Num. cores per package: 4
Nov 6 00:26:15.604666 kernel: CPU topo: Num. threads per package: 4
Nov 6 00:26:15.604678 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 6 00:26:15.604689 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:26:15.604701 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 6 00:26:15.604713 kernel: kvm-guest: setup PV sched yield
Nov 6 00:26:15.604725 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 6 00:26:15.604737 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:26:15.604751 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:26:15.604763 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 6 00:26:15.604775 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 6 00:26:15.604786 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 6 00:26:15.604797 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 6 00:26:15.604808 kernel: kvm-guest: PV spinlocks enabled
Nov 6 00:26:15.604820 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:26:15.604833 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:26:15.604848 kernel: random: crng init done
Nov 6 00:26:15.604860 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:26:15.604874 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:26:15.604886 kernel: Fallback order for Node 0: 0
Nov 6 00:26:15.604897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 6 00:26:15.604909 kernel: Policy zone: DMA32
Nov 6 00:26:15.604924 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:26:15.604937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 6 00:26:15.604948 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 6 00:26:15.604960 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:26:15.604972 kernel: Dynamic Preempt: voluntary
Nov 6 00:26:15.604984 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:26:15.604996 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:26:15.605035 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 6 00:26:15.605053 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:26:15.605070 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:26:15.605081 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:26:15.605092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:26:15.605104 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 6 00:26:15.605116 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:26:15.605128 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:26:15.605144 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:26:15.605155 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 6 00:26:15.605167 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:26:15.605188 kernel: Console: colour VGA+ 80x25
Nov 6 00:26:15.605202 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:26:15.605215 kernel: ACPI: Core revision 20240827
Nov 6 00:26:15.605228 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:26:15.605240 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:26:15.605252 kernel: x2apic enabled
Nov 6 00:26:15.605269 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:26:15.605286 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 6 00:26:15.605299 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 6 00:26:15.605311 kernel: kvm-guest: setup PV IPIs
Nov 6 00:26:15.605327 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:26:15.605340 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:26:15.605352 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 6 00:26:15.605364 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:26:15.605377 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 6 00:26:15.605389 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 6 00:26:15.605401 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:26:15.605417 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:26:15.605430 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:26:15.605442 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 6 00:26:15.605454 kernel: active return thunk: retbleed_return_thunk
Nov 6 00:26:15.605466 kernel: RETBleed: Mitigation: untrained return thunk
Nov 6 00:26:15.605479 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:26:15.605491 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:26:15.605507 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 6 00:26:15.605521 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 6 00:26:15.605533 kernel: active return thunk: srso_return_thunk
Nov 6 00:26:15.605546 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 6 00:26:15.605558 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:26:15.605570 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:26:15.605595 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:26:15.605612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:26:15.605624 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 6 00:26:15.605636 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:26:15.605649 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:26:15.605661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:26:15.605673 kernel: landlock: Up and running.
Nov 6 00:26:15.605685 kernel: SELinux: Initializing.
Nov 6 00:26:15.605706 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:26:15.605720 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:26:15.605732 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 6 00:26:15.605744 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 6 00:26:15.605756 kernel: ... version: 0
Nov 6 00:26:15.605768 kernel: ... bit width: 48
Nov 6 00:26:15.605780 kernel: ... generic registers: 6
Nov 6 00:26:15.605796 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:26:15.605808 kernel: ... max period: 00007fffffffffff
Nov 6 00:26:15.605820 kernel: ... fixed-purpose events: 0
Nov 6 00:26:15.605832 kernel: ... event mask: 000000000000003f
Nov 6 00:26:15.605844 kernel: signal: max sigframe size: 1776
Nov 6 00:26:15.605857 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:26:15.605870 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:26:15.605886 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:26:15.605899 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:26:15.605911 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:26:15.605923 kernel: .... node #0, CPUs: #1 #2 #3
Nov 6 00:26:15.605935 kernel: smp: Brought up 1 node, 4 CPUs
Nov 6 00:26:15.605947 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 6 00:26:15.605960 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114376K reserved, 0K cma-reserved)
Nov 6 00:26:15.605977 kernel: devtmpfs: initialized
Nov 6 00:26:15.605989 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:26:15.606002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:26:15.606041 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 6 00:26:15.606053 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:26:15.606069 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:26:15.606081 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:26:15.606099 kernel: audit: type=2000 audit(1762388772.034:1): state=initialized audit_enabled=0 res=1
Nov 6 00:26:15.606111 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:26:15.606122 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:26:15.606135 kernel: cpuidle: using governor menu
Nov 6 00:26:15.606146 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:26:15.606159 kernel: dca service started, version 1.12.1
Nov 6 00:26:15.606171 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 6 00:26:15.606188 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 6 00:26:15.606200 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:26:15.606213 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:26:15.606225 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:26:15.606237 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:26:15.606250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:26:15.606262 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:26:15.606280 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:26:15.606292 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:26:15.606304 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:26:15.606316 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:26:15.606328 kernel: ACPI: Interpreter enabled
Nov 6 00:26:15.606341 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 6 00:26:15.606353 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:26:15.606365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:26:15.606383 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:26:15.606395 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 6 00:26:15.606408 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:26:15.606769 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:26:15.607053 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 6 00:26:15.607307 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 6 00:26:15.607333 kernel: PCI host bridge to bus 0000:00
Nov 6 00:26:15.607592 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:26:15.607871 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:26:15.608119 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:26:15.608341 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 6 00:26:15.608568 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 00:26:15.608798 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 6 00:26:15.609042 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:26:15.609308 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:26:15.609567 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:26:15.609840 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 6 00:26:15.610119 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 6 00:26:15.610357 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 6 00:26:15.610679 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:26:15.610906 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 6 00:26:15.611114 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 6 00:26:15.611306 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 6 00:26:15.611488 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 6 00:26:15.611719 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:26:15.611912 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 6 00:26:15.612161 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 6 00:26:15.612349 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 6 00:26:15.612548 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:26:15.612743 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 6 00:26:15.612924 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 6 00:26:15.613136 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 6 00:26:15.613350 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 6 00:26:15.613544 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:26:15.613753 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 6 00:26:15.613976 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 6 00:26:15.614329 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 6 00:26:15.614515 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 6 00:26:15.614725 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 6 00:26:15.614913 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 6 00:26:15.614925 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:26:15.614934 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:26:15.614943 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:26:15.614957 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:26:15.614966 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 6 00:26:15.614975 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 6 00:26:15.614987 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 6 00:26:15.614996 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 6 00:26:15.615005 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 6 00:26:15.615030 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 6 00:26:15.615039 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 6 00:26:15.615048 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 6 00:26:15.615057 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 6 00:26:15.615068 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 6 00:26:15.615077 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 6 00:26:15.615086 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 6 00:26:15.615095 kernel: iommu: Default domain type: Translated
Nov 6 00:26:15.615103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:26:15.615112 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:26:15.615121 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:26:15.615133 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 6 00:26:15.615142 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 6 00:26:15.615326 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 6 00:26:15.615507 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 6 00:26:15.615698 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:26:15.615710 kernel: vgaarb: loaded
Nov 6 00:26:15.615720 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:26:15.615732 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:26:15.615742 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:26:15.615751 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:26:15.615759 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:26:15.615769 kernel: pnp: PnP ACPI init
Nov 6 00:26:15.615973 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 6 00:26:15.615991 kernel: pnp: PnP ACPI: found 6 devices
Nov 6 00:26:15.616001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:26:15.616024 kernel: NET: Registered PF_INET protocol family
Nov 6 00:26:15.616033 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:26:15.616042 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 00:26:15.616051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:26:15.616059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:26:15.616071 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 00:26:15.616080 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 00:26:15.616089 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:26:15.616098 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:26:15.616107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:26:15.616116 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:26:15.616293 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:26:15.616466 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:26:15.616648 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:26:15.616816 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 6 00:26:15.616984 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 6 00:26:15.617310 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 6 00:26:15.617342 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:26:15.617351 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:26:15.617366 kernel: Initialise system trusted keyrings
Nov 6 00:26:15.617378 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 00:26:15.617397 kernel: Key type asymmetric registered
Nov 6 00:26:15.617415 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:26:15.617424 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:26:15.617433 kernel: io scheduler mq-deadline registered
Nov 6 00:26:15.617442 kernel: io scheduler kyber registered
Nov 6 00:26:15.617454 kernel: io scheduler bfq registered
Nov 6 00:26:15.617462 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:26:15.617472 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 6 00:26:15.617481 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 6 00:26:15.617490 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 6 00:26:15.617499 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:26:15.617508 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:26:15.617519 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:26:15.617528 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:26:15.617537 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:26:15.617743 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 6 00:26:15.617757 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:26:15.617932 kernel: rtc_cmos 00:04: registered as rtc0
Nov 6 00:26:15.618125 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:26:13 UTC (1762388773)
Nov 6 00:26:15.618322 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 6 00:26:15.618340 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 6 00:26:15.618349 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:26:15.618358 kernel: Segment Routing with IPv6
Nov 6 00:26:15.618366 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:26:15.618375 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:26:15.618384 kernel: Key type dns_resolver registered
Nov 6 00:26:15.618397 kernel: IPI shorthand broadcast: enabled
Nov 6 00:26:15.618406 kernel: sched_clock: Marking stable (1600004345, 309181663)->(2100961212, -191775204)
Nov 6 00:26:15.618414 kernel: registered taskstats version 1
Nov 6 00:26:15.618422 kernel: Loading compiled-in X.509 certificates
Nov 6 00:26:15.618431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 92154d1aa04a8c1424f65981683e67110e07d121'
Nov 6 00:26:15.618440 kernel: Demotion targets for Node 0: null
Nov 6 00:26:15.618448 kernel: Key type .fscrypt registered
Nov 6 00:26:15.618459 kernel: Key type fscrypt-provisioning registered
Nov 6 00:26:15.618467 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:26:15.618476 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:26:15.618484 kernel: ima: No architecture policies found
Nov 6 00:26:15.618493 kernel: clk: Disabling unused clocks
Nov 6 00:26:15.618502 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 6 00:26:15.618511 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:26:15.618527 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 6 00:26:15.618535 kernel: Run /init as init process
Nov 6 00:26:15.618544 kernel: with arguments:
Nov 6 00:26:15.618552 kernel: /init
Nov 6 00:26:15.618561 kernel: with environment:
Nov 6 00:26:15.618569 kernel: HOME=/
Nov 6 00:26:15.618578 kernel: TERM=linux
Nov 6 00:26:15.618596 kernel: SCSI subsystem initialized
Nov 6 00:26:15.618607 kernel: libata version 3.00 loaded.
Nov 6 00:26:15.618798 kernel: ahci 0000:00:1f.2: version 3.0
Nov 6 00:26:15.618827 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 6 00:26:15.619021 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 6 00:26:15.619208 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 6 00:26:15.619399 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 6 00:26:15.619614 kernel: scsi host0: ahci
Nov 6 00:26:15.619812 kernel: scsi host1: ahci
Nov 6 00:26:15.620020 kernel: scsi host2: ahci
Nov 6 00:26:15.620219 kernel: scsi host3: ahci
Nov 6 00:26:15.620425 kernel: scsi host4: ahci
Nov 6 00:26:15.620656 kernel: scsi host5: ahci
Nov 6 00:26:15.620672 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 6 00:26:15.620682 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 6 00:26:15.620691 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 6 00:26:15.620700 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 6 00:26:15.620709 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 6 00:26:15.620723 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 6 00:26:15.620731 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 6 00:26:15.620741 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 6 00:26:15.620750 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 6 00:26:15.620758 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 6 00:26:15.620768 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 6 00:26:15.620777 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:26:15.620788 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 6 00:26:15.620797 kernel: ata3.00: applying bridge limits
Nov 6 00:26:15.620805 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 6 00:26:15.620814 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:26:15.620823 kernel: ata3.00: configured for UDMA/100
Nov 6 00:26:15.621083 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 6 00:26:15.621301 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 6 00:26:15.621492 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 6 00:26:15.621510 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:26:15.621527 kernel: GPT:16515071 != 27000831
Nov 6 00:26:15.621539 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:26:15.621550 kernel: GPT:16515071 != 27000831
Nov 6 00:26:15.621561 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:26:15.621578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:26:15.621847 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 00:26:15.621862 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:26:15.622128 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 6 00:26:15.622149 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:26:15.622161 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:26:15.622172 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:26:15.622190 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 6 00:26:15.622206 kernel: raid6: avx2x4 gen() 29239 MB/s Nov 6 00:26:15.622219 kernel: raid6: avx2x2 gen() 30411 MB/s Nov 6 00:26:15.622233 kernel: raid6: avx2x1 gen() 20441 MB/s Nov 6 00:26:15.622249 kernel: raid6: using algorithm avx2x2 gen() 30411 MB/s Nov 6 00:26:15.622262 kernel: raid6: .... 
xor() 19395 MB/s, rmw enabled Nov 6 00:26:15.622274 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:26:15.622285 kernel: xor: automatically using best checksumming function avx Nov 6 00:26:15.622297 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:26:15.622309 kernel: BTRFS: device fsid 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (181) Nov 6 00:26:15.622321 kernel: BTRFS info (device dm-0): first mount of filesystem 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 Nov 6 00:26:15.622337 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:26:15.622349 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:26:15.622361 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:26:15.622373 kernel: loop: module loaded Nov 6 00:26:15.622384 kernel: loop0: detected capacity change from 0 to 100120 Nov 6 00:26:15.622397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:26:15.622411 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:26:15.622430 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:26:15.622443 systemd[1]: Detected virtualization kvm. Nov 6 00:26:15.622455 systemd[1]: Detected architecture x86-64. Nov 6 00:26:15.622468 systemd[1]: Running in initrd. Nov 6 00:26:15.622481 systemd[1]: No hostname configured, using default hostname. Nov 6 00:26:15.622494 systemd[1]: Hostname set to <localhost>. Nov 6 00:26:15.622511 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 6 00:26:15.622523 systemd[1]: Queued start job for default target initrd.target. 
Nov 6 00:26:15.622536 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:26:15.622548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:26:15.622561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:26:15.622575 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:26:15.622613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:26:15.622643 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:26:15.622674 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:26:15.622710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:26:15.622724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:26:15.622737 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:26:15.622754 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:26:15.622766 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:26:15.622779 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:26:15.622792 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:26:15.622804 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:26:15.622817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:26:15.622830 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:26:15.622845 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:26:15.622858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 6 00:26:15.622870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:26:15.622883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:26:15.622896 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:26:15.622910 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:26:15.622923 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:26:15.622940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:26:15.622954 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:26:15.622968 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:26:15.622981 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:26:15.622995 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:26:15.623027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:26:15.623045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:26:15.623059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:26:15.623072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:26:15.623085 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:26:15.623102 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:26:15.623156 systemd-journald[315]: Collecting audit messages is disabled. Nov 6 00:26:15.623183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 6 00:26:15.623199 systemd-journald[315]: Journal started Nov 6 00:26:15.623221 systemd-journald[315]: Runtime Journal (/run/log/journal/52eea86e222c4c5ab84e9721921781c7) is 6M, max 48.3M, 42.2M free. Nov 6 00:26:15.664169 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:26:15.667096 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:26:15.667700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:26:15.759796 kernel: Bridge firewalling registered Nov 6 00:26:15.670141 systemd-modules-load[318]: Inserted module 'br_netfilter' Nov 6 00:26:15.767255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:26:15.772296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:26:15.783510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:26:15.790470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:26:15.796258 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:26:15.799562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:26:15.801840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:26:15.817871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:26:15.834408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:26:15.837148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:26:15.845264 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:26:15.850142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 6 00:26:15.884496 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38 Nov 6 00:26:15.924285 systemd-resolved[359]: Positive Trust Anchors: Nov 6 00:26:15.924301 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:26:15.924306 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 6 00:26:15.924338 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:26:15.952960 systemd-resolved[359]: Defaulting to hostname 'linux'. Nov 6 00:26:15.955100 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:26:15.957905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:26:16.085078 kernel: Loading iSCSI transport class v2.0-870. 
Nov 6 00:26:16.108060 kernel: iscsi: registered transport (tcp) Nov 6 00:26:16.152630 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:26:16.152735 kernel: QLogic iSCSI HBA Driver Nov 6 00:26:16.196453 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:26:16.229313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:26:16.236224 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:26:16.316310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:26:16.321636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:26:16.323283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:26:16.380904 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:26:16.384213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:26:16.437368 systemd-udevd[596]: Using default interface naming scheme 'v257'. Nov 6 00:26:16.455198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:26:16.462475 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:26:16.497520 dracut-pre-trigger[671]: rd.md=0: removing MD RAID activation Nov 6 00:26:16.499486 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:26:16.504802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:26:16.538685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:26:16.544979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 6 00:26:16.569462 systemd-networkd[710]: lo: Link UP Nov 6 00:26:16.569475 systemd-networkd[710]: lo: Gained carrier Nov 6 00:26:16.570426 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:26:16.573161 systemd[1]: Reached target network.target - Network. Nov 6 00:26:16.659769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:26:16.665656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:26:16.723509 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 00:26:16.766408 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 00:26:16.778036 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:26:16.786742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:26:16.809341 kernel: AES CTR mode by8 optimization enabled Nov 6 00:26:16.815658 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 00:26:16.819720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 00:26:16.833068 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:26:16.835126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:26:16.835318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:26:16.841140 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:26:16.841147 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:26:16.842262 systemd-networkd[710]: eth0: Link UP Nov 6 00:26:16.842535 systemd-networkd[710]: eth0: Gained carrier Nov 6 00:26:16.842562 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:26:16.842896 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:26:16.855276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:26:16.878188 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:26:16.984911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:26:17.004657 disk-uuid[836]: Primary Header is updated. Nov 6 00:26:17.004657 disk-uuid[836]: Secondary Entries is updated. Nov 6 00:26:17.004657 disk-uuid[836]: Secondary Header is updated. Nov 6 00:26:17.005311 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:26:17.009705 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:26:17.012408 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:26:17.026653 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:26:17.036870 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:26:17.070565 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:26:18.180069 disk-uuid[846]: Warning: The kernel is still using the old partition table. Nov 6 00:26:18.180069 disk-uuid[846]: The new table will be used at the next reboot or after you Nov 6 00:26:18.180069 disk-uuid[846]: run partprobe(8) or kpartx(8) Nov 6 00:26:18.180069 disk-uuid[846]: The operation has completed successfully. Nov 6 00:26:18.208247 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 6 00:26:18.208441 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:26:18.211132 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:26:18.252471 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Nov 6 00:26:18.252574 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:26:18.252606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:26:18.258471 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:26:18.258565 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:26:18.267055 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:26:18.268166 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:26:18.270299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 6 00:26:18.597320 systemd-networkd[710]: eth0: Gained IPv6LL Nov 6 00:26:18.962156 ignition[884]: Ignition 2.22.0 Nov 6 00:26:18.962169 ignition[884]: Stage: fetch-offline Nov 6 00:26:18.962225 ignition[884]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:26:18.962238 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:26:18.962330 ignition[884]: parsed url from cmdline: "" Nov 6 00:26:18.962334 ignition[884]: no config URL provided Nov 6 00:26:18.962340 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:26:18.962353 ignition[884]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:26:18.962415 ignition[884]: op(1): [started] loading QEMU firmware config module Nov 6 00:26:18.962424 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 00:26:18.980506 ignition[884]: op(1): [finished] loading QEMU firmware config module Nov 6 00:26:19.068842 ignition[884]: parsing config with SHA512: eb6f1c9170e9f2980d9082a1c10695bb65a24ed92dcd9aa325c1d9837b533fa85619bf418b7931da0165e5eca171700088144d7df05b1ad3271d8667465e4034 Nov 6 00:26:19.076348 unknown[884]: fetched base config from "system" Nov 6 00:26:19.076365 unknown[884]: fetched user config from "qemu" Nov 6 00:26:19.086408 ignition[884]: fetch-offline: fetch-offline passed Nov 6 00:26:19.088039 ignition[884]: Ignition finished successfully Nov 6 00:26:19.092724 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:26:19.096775 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 00:26:19.098003 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 6 00:26:19.492639 ignition[895]: Ignition 2.22.0 Nov 6 00:26:19.492654 ignition[895]: Stage: kargs Nov 6 00:26:19.492819 ignition[895]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:26:19.492832 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:26:19.495439 ignition[895]: kargs: kargs passed Nov 6 00:26:19.495534 ignition[895]: Ignition finished successfully Nov 6 00:26:19.505344 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:26:19.509089 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:26:19.556387 ignition[903]: Ignition 2.22.0 Nov 6 00:26:19.556400 ignition[903]: Stage: disks Nov 6 00:26:19.556561 ignition[903]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:26:19.556571 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:26:19.557808 ignition[903]: disks: disks passed Nov 6 00:26:19.557867 ignition[903]: Ignition finished successfully Nov 6 00:26:19.567194 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:26:19.576097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:26:19.576866 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:26:19.582915 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:26:19.586368 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:26:19.589693 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:26:19.594409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:26:19.647722 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 6 00:26:19.818828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:26:19.827405 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 6 00:26:20.091136 kernel: EXT4-fs (vda9): mounted filesystem d1cfc077-cc9a-4d2c-97de-8a87792eb8cf r/w with ordered data mode. Quota mode: none. Nov 6 00:26:20.092508 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:26:20.096022 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:26:20.101368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:26:20.105746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:26:20.109045 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 00:26:20.109100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:26:20.109140 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:26:20.125186 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:26:20.129830 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 00:26:20.137052 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Nov 6 00:26:20.141316 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:26:20.141346 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:26:20.146176 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:26:20.146211 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:26:20.148500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 00:26:20.197841 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:26:20.204778 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:26:20.211067 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:26:20.217618 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:26:20.364532 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:26:20.369979 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:26:20.372281 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:26:20.394803 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:26:20.397655 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:26:20.414250 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:26:20.490562 ignition[1037]: INFO : Ignition 2.22.0 Nov 6 00:26:20.490562 ignition[1037]: INFO : Stage: mount Nov 6 00:26:20.505769 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:26:20.505769 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:26:20.505769 ignition[1037]: INFO : mount: mount passed Nov 6 00:26:20.505769 ignition[1037]: INFO : Ignition finished successfully Nov 6 00:26:20.506170 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:26:20.513358 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:26:21.095472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 6 00:26:21.128067 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1049) Nov 6 00:26:21.131347 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:26:21.131380 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:26:21.135476 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:26:21.135525 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:26:21.137997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:26:21.197666 ignition[1066]: INFO : Ignition 2.22.0 Nov 6 00:26:21.197666 ignition[1066]: INFO : Stage: files Nov 6 00:26:21.200560 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:26:21.200560 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:26:21.200560 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:26:21.200560 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:26:21.200560 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:26:21.210854 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:26:21.210854 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:26:21.210854 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:26:21.210854 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 00:26:21.210854 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 6 00:26:21.203739 unknown[1066]: wrote ssh authorized keys file for user: core Nov 6 00:26:21.255328 
ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:26:21.438672 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 00:26:21.438672 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:26:21.438672 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 00:26:21.533171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 00:26:21.763447 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:26:21.763447 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:26:21.770171 ignition[1066]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:26:21.868241 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:26:21.871866 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:26:21.871866 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 00:26:21.931191 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 00:26:21.931191 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 00:26:21.943751 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 6 00:26:22.177702 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 00:26:23.038816 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 00:26:23.038816 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 00:26:23.045790 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:26:23.157197 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:26:23.157197 
ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 00:26:23.157197 ignition[1066]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 00:26:23.157197 ignition[1066]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:26:23.170220 ignition[1066]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:26:23.170220 ignition[1066]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 6 00:26:23.170220 ignition[1066]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 00:26:23.208379 ignition[1066]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:26:23.221645 ignition[1066]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:26:23.224681 ignition[1066]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 00:26:23.224681 ignition[1066]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:26:23.229634 ignition[1066]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:26:23.229634 ignition[1066]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:26:23.229634 ignition[1066]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:26:23.229634 ignition[1066]: INFO : files: files passed Nov 6 00:26:23.229634 ignition[1066]: INFO : Ignition finished successfully Nov 6 00:26:23.242462 systemd[1]: Finished ignition-files.service - Ignition (files). 
Nov 6 00:26:23.248855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:26:23.253747 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:26:23.275987 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:26:23.276239 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 00:26:23.304789 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 00:26:23.311323 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:26:23.311323 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:26:23.317150 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:26:23.321067 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:26:23.325559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:26:23.330716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:26:23.410465 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:26:23.410692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:26:23.412135 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:26:23.417718 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:26:23.423075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:26:23.425722 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:26:23.477957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Nov 6 00:26:23.480834 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 00:26:23.514327 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:26:23.514696 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:26:23.516202 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:26:23.517272 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 00:26:23.518844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 00:26:23.519173 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:26:23.535803 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 00:26:23.539656 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 00:26:23.543149 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 00:26:23.549075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:26:23.550095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 00:26:23.555531 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:26:23.556123 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 00:26:23.556727 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:26:23.566981 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 00:26:23.571512 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 00:26:23.574763 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 00:26:23.578774 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 00:26:23.578983 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:26:23.584191 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:26:23.585222 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:26:23.590708 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 00:26:23.593933 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:26:23.598233 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 00:26:23.598492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:26:23.604226 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 00:26:23.604472 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:26:23.605611 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 00:26:23.610471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 00:26:23.617109 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:26:23.617909 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 00:26:23.622590 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 00:26:23.623198 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 00:26:23.623340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:26:23.629710 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 00:26:23.629841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:26:23.633072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 00:26:23.633287 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:26:23.633989 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 00:26:23.634139 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 00:26:23.646277 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 00:26:23.646835 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 00:26:23.647052 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:26:23.653155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 00:26:23.655590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 00:26:23.655785 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:26:23.659395 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 00:26:23.659627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:26:23.660986 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 00:26:23.661327 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:26:23.671233 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 00:26:23.671460 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 00:26:23.737823 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 00:26:23.777425 ignition[1123]: INFO : Ignition 2.22.0
Nov 6 00:26:23.777425 ignition[1123]: INFO : Stage: umount
Nov 6 00:26:23.826128 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:26:23.826128 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:26:23.826128 ignition[1123]: INFO : umount: umount passed
Nov 6 00:26:23.826128 ignition[1123]: INFO : Ignition finished successfully
Nov 6 00:26:23.831233 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 00:26:23.831601 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 00:26:23.833613 systemd[1]: Stopped target network.target - Network.
Nov 6 00:26:23.837749 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 00:26:23.837876 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 00:26:23.838682 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 00:26:23.838755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 00:26:23.844699 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 00:26:23.844804 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 00:26:23.848801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 00:26:23.848899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 00:26:23.852573 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 00:26:23.856607 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 00:26:23.871321 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 00:26:23.871583 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 00:26:23.884839 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 00:26:23.885045 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 00:26:23.893336 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 00:26:23.893546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 00:26:23.897245 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 00:26:23.898560 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 00:26:23.898612 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:26:23.902046 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 00:26:23.902154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 00:26:23.908295 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 00:26:23.910942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 00:26:23.911044 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:26:23.912643 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:26:23.912695 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:26:23.918439 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 00:26:23.918510 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:26:23.921671 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:26:23.953809 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 00:26:23.954101 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:26:23.955377 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 00:26:23.955451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:26:23.955904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 00:26:23.955965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:26:23.957490 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 00:26:23.957578 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:26:23.969721 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 00:26:23.969837 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:26:23.971117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 00:26:23.971190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:26:23.988033 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 00:26:23.988804 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 00:26:23.988934 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:26:23.989843 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 00:26:23.989916 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:26:23.990701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:26:23.990775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:26:24.008977 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 00:26:24.009187 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 00:26:24.027649 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 00:26:24.027849 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 00:26:24.029329 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 00:26:24.037062 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 00:26:24.077437 systemd[1]: Switching root.
Nov 6 00:26:24.115495 systemd-journald[315]: Journal stopped
Nov 6 00:26:26.359127 systemd-journald[315]: Received SIGTERM from PID 1 (systemd).
Nov 6 00:26:26.359194 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 00:26:26.359213 kernel: SELinux: policy capability open_perms=1
Nov 6 00:26:26.359227 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 00:26:26.359244 kernel: SELinux: policy capability always_check_network=0
Nov 6 00:26:26.359260 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 00:26:26.359272 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 00:26:26.359293 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 00:26:26.359306 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 00:26:26.359318 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 00:26:26.359334 kernel: audit: type=1403 audit(1762388785.246:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 00:26:26.359349 systemd[1]: Successfully loaded SELinux policy in 72.063ms.
Nov 6 00:26:26.359371 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.325ms.
Nov 6 00:26:26.359386 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:26:26.359400 systemd[1]: Detected virtualization kvm.
Nov 6 00:26:26.359413 systemd[1]: Detected architecture x86-64.
Nov 6 00:26:26.359426 systemd[1]: Detected first boot.
Nov 6 00:26:26.359444 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 00:26:26.359457 zram_generator::config[1170]: No configuration found.
Nov 6 00:26:26.359474 kernel: Guest personality initialized and is inactive
Nov 6 00:26:26.359486 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 00:26:26.359499 kernel: Initialized host personality
Nov 6 00:26:26.359511 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 00:26:26.359524 systemd[1]: Populated /etc with preset unit settings.
Nov 6 00:26:26.359537 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 00:26:26.359552 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 00:26:26.359565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:26:26.359578 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 00:26:26.359591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 00:26:26.359604 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 00:26:26.359617 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 00:26:26.359630 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 00:26:26.359645 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 00:26:26.359659 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 00:26:26.359672 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 00:26:26.359685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:26:26.359698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:26:26.359712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 00:26:26.359724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 00:26:26.359739 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 00:26:26.359753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:26:26.359766 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 00:26:26.359780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:26:26.359796 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:26:26.359810 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 00:26:26.359825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 00:26:26.359838 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:26:26.359850 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 00:26:26.359863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:26:26.359876 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:26:26.359889 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:26:26.359901 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:26:26.359914 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 00:26:26.359929 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 00:26:26.359942 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 00:26:26.359955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:26:26.359967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:26:26.359980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:26:26.359994 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 00:26:26.360019 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 00:26:26.360036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 00:26:26.360048 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 00:26:26.360062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:26:26.360074 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 00:26:26.360087 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 00:26:26.360100 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 00:26:26.360170 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 00:26:26.360186 systemd[1]: Reached target machines.target - Containers.
Nov 6 00:26:26.360199 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 00:26:26.360212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:26:26.360225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:26:26.360237 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:26:26.360250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:26:26.360265 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:26:26.360286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:26:26.360300 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:26:26.360312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:26:26.360325 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 00:26:26.360338 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 00:26:26.360351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 00:26:26.360366 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 00:26:26.360379 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 00:26:26.360392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:26:26.360404 kernel: fuse: init (API version 7.41)
Nov 6 00:26:26.360417 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:26:26.360430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:26:26.360443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:26:26.360466 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 00:26:26.360478 kernel: ACPI: bus type drm_connector registered
Nov 6 00:26:26.360491 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 00:26:26.360504 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:26:26.360520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:26:26.360533 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 00:26:26.360546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 00:26:26.360559 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 00:26:26.360571 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 00:26:26.360583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 00:26:26.360596 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 00:26:26.360611 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:26:26.360624 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:26:26.360637 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:26:26.360671 systemd-journald[1241]: Collecting audit messages is disabled.
Nov 6 00:26:26.360696 systemd-journald[1241]: Journal started
Nov 6 00:26:26.360721 systemd-journald[1241]: Runtime Journal (/run/log/journal/52eea86e222c4c5ab84e9721921781c7) is 6M, max 48.3M, 42.2M free.
Nov 6 00:26:25.955068 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 00:26:25.971878 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 6 00:26:25.972701 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 00:26:26.373059 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:26:26.377155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:26:26.377532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:26:26.380246 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:26:26.380559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:26:26.387250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:26:26.387565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:26:26.390472 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:26:26.390768 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:26:26.393422 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:26:26.393721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:26:26.402110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:26:26.405344 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:26:26.410667 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 00:26:26.413631 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 00:26:26.416929 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 00:26:26.438672 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:26:26.443910 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 6 00:26:26.447730 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 00:26:26.455248 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 00:26:26.457427 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 00:26:26.457568 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:26:26.460860 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 00:26:26.463481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:26:26.468369 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 00:26:26.471991 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 00:26:26.474220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:26:26.475547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 00:26:26.477544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:26:26.480406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:26:26.484123 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 00:26:26.487968 systemd-journald[1241]: Time spent on flushing to /var/log/journal/52eea86e222c4c5ab84e9721921781c7 is 18.136ms for 967 entries.
Nov 6 00:26:26.487968 systemd-journald[1241]: System Journal (/var/log/journal/52eea86e222c4c5ab84e9721921781c7) is 8M, max 163.5M, 155.5M free.
Nov 6 00:26:26.605165 systemd-journald[1241]: Received client request to flush runtime journal.
Nov 6 00:26:26.605207 kernel: loop1: detected capacity change from 0 to 128048
Nov 6 00:26:26.488160 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 00:26:26.493618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:26:26.497284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 00:26:26.499610 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 00:26:26.561868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:26:26.581736 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 00:26:26.585480 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 00:26:26.591297 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 00:26:26.617445 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 00:26:26.620942 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 00:26:26.629640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:26:26.633793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:26:26.644639 kernel: loop2: detected capacity change from 0 to 110976
Nov 6 00:26:26.650621 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:26:26.657478 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 00:26:26.679458 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:26:26.679485 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:26:26.686976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:26:26.702106 kernel: loop3: detected capacity change from 0 to 224512
Nov 6 00:26:26.708471 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:26:26.764040 kernel: loop4: detected capacity change from 0 to 128048
Nov 6 00:26:26.775036 kernel: loop5: detected capacity change from 0 to 110976
Nov 6 00:26:26.785048 kernel: loop6: detected capacity change from 0 to 224512
Nov 6 00:26:26.791531 (sd-merge)[1318]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 6 00:26:26.796992 (sd-merge)[1318]: Merged extensions into '/usr'.
Nov 6 00:26:26.802667 systemd[1]: Reload requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 00:26:26.802686 systemd[1]: Reloading...
Nov 6 00:26:26.815678 systemd-resolved[1304]: Positive Trust Anchors:
Nov 6 00:26:26.815695 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:26:26.815701 systemd-resolved[1304]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:26:26.815739 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:26:26.820811 systemd-resolved[1304]: Defaulting to hostname 'linux'.
Nov 6 00:26:26.875057 zram_generator::config[1346]: No configuration found.
Nov 6 00:26:27.164350 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 00:26:27.164926 systemd[1]: Reloading finished in 361 ms.
Nov 6 00:26:27.215802 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:26:27.218381 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 00:26:27.224467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:26:27.243067 systemd[1]: Starting ensure-sysext.service...
Nov 6 00:26:27.245951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:26:27.284465 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Nov 6 00:26:27.284491 systemd[1]: Reloading...
Nov 6 00:26:27.288267 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 00:26:27.288356 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 00:26:27.288731 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 00:26:27.289746 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 00:26:27.291178 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 00:26:27.291702 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Nov 6 00:26:27.291912 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Nov 6 00:26:27.299900 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:26:27.300111 systemd-tmpfiles[1383]: Skipping /boot
Nov 6 00:26:27.311775 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:26:27.311933 systemd-tmpfiles[1383]: Skipping /boot
Nov 6 00:26:27.359045 zram_generator::config[1417]: No configuration found.
Nov 6 00:26:27.713260 systemd[1]: Reloading finished in 428 ms.
Nov 6 00:26:27.739204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:26:27.774262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:26:27.787134 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:26:27.790389 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:26:27.814128 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 00:26:27.818504 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:26:27.825343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:26:27.830307 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:26:27.837339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.837585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:26:27.841519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:26:27.846352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:26:27.853337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:26:27.855630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:26:27.855796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:26:27.855940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.859070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:26:27.859387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:26:27.863001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:26:27.863356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:26:27.870891 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:26:27.871632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:26:27.885047 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:26:27.890433 systemd-udevd[1459]: Using default interface naming scheme 'v257'. 
Nov 6 00:26:27.897530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.897925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:26:27.900065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:26:27.904188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:26:27.910318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:26:27.912273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:26:27.912391 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:26:27.912488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.913808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:26:27.916953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:26:27.917215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:26:27.920265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:26:27.920491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:26:27.926468 augenrules[1487]: No rules Nov 6 00:26:27.931301 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:26:27.935633 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:26:27.942473 systemd[1]: Finished ensure-sysext.service. 
Nov 6 00:26:27.944442 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:26:27.944669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:26:27.947837 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:26:27.958144 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.958338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:26:27.961100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:26:27.965216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:26:27.969387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:26:27.971915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:26:27.972130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:26:27.977293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:26:27.988704 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:26:27.990955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:26:27.992134 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:26:27.996641 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:26:27.997151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 6 00:26:27.999679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:26:28.000004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:26:28.003176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:26:28.003454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:26:28.014738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:26:28.015058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:26:28.015100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:26:28.106458 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:26:28.119053 systemd-networkd[1517]: lo: Link UP Nov 6 00:26:28.119073 systemd-networkd[1517]: lo: Gained carrier Nov 6 00:26:28.126109 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:26:28.131416 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:26:28.133282 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:26:28.138897 systemd[1]: Reached target network.target - Network. Nov 6 00:26:28.143680 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:26:28.148390 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:26:28.160191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Nov 6 00:26:28.165231 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:26:28.187785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 00:26:28.187861 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:26:28.187757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:26:28.194048 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:26:28.202942 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:26:28.202956 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:26:28.206478 systemd-networkd[1517]: eth0: Link UP Nov 6 00:26:28.209389 systemd-networkd[1517]: eth0: Gained carrier Nov 6 00:26:28.209423 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:26:28.212897 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:26:28.226620 systemd-networkd[1517]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:26:28.232285 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Nov 6 00:26:28.705971 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 00:26:28.706019 systemd-timesyncd[1519]: Initial clock synchronization to Thu 2025-11-06 00:26:28.705855 UTC. Nov 6 00:26:28.706090 systemd-resolved[1304]: Clock change detected. Flushing caches. 
Nov 6 00:26:28.899394 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 00:26:28.903390 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:26:28.988144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:26:28.990332 kernel: kvm_amd: TSC scaling supported Nov 6 00:26:28.990373 kernel: kvm_amd: Nested Virtualization enabled Nov 6 00:26:28.990388 kernel: kvm_amd: Nested Paging enabled Nov 6 00:26:28.990401 kernel: kvm_amd: LBR virtualization supported Nov 6 00:26:28.997576 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 6 00:26:28.997733 kernel: kvm_amd: Virtual GIF supported Nov 6 00:26:29.069868 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:26:29.174953 ldconfig[1454]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:26:29.189331 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:26:29.222478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:26:29.232275 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:26:29.268213 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:26:29.271012 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:26:29.273244 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:26:29.275549 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:26:29.278183 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:26:29.280514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:26:29.282958 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Nov 6 00:26:29.285275 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:26:29.287678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:26:29.287721 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:26:29.289479 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:26:29.292751 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:26:29.297244 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:26:29.301573 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:26:29.303939 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:26:29.306156 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:26:29.310997 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:26:29.313229 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:26:29.316187 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:26:29.318720 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:26:29.320352 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:26:29.321972 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:26:29.322004 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:26:29.323802 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:26:29.327761 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:26:29.339389 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Nov 6 00:26:29.352563 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:26:29.356145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:26:29.358163 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:26:29.370876 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:26:29.375622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:26:29.383865 jq[1580]: false Nov 6 00:26:29.380904 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:26:29.384862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:26:29.389103 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:26:29.393349 extend-filesystems[1581]: Found /dev/vda6 Nov 6 00:26:29.398079 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:26:29.398755 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:26:29.399516 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:26:29.401396 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:26:29.405640 extend-filesystems[1581]: Found /dev/vda9 Nov 6 00:26:29.406388 extend-filesystems[1581]: Checking size of /dev/vda9 Nov 6 00:26:29.408940 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:26:29.411540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:26:29.414467 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 6 00:26:29.414735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:26:29.429795 jq[1597]: true Nov 6 00:26:29.427748 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:26:29.428960 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:26:29.449327 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Nov 6 00:26:29.449352 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Nov 6 00:26:29.450311 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:26:29.453149 jq[1611]: true Nov 6 00:26:29.450912 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:26:29.456338 tar[1603]: linux-amd64/LICENSE Nov 6 00:26:29.456597 tar[1603]: linux-amd64/helm Nov 6 00:26:29.462313 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:26:29.463600 update_engine[1592]: I20251106 00:26:29.462903 1592 main.cc:92] Flatcar Update Engine starting Nov 6 00:26:29.479287 dbus-daemon[1578]: [system] SELinux support is enabled Nov 6 00:26:29.479765 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:26:29.487280 update_engine[1592]: I20251106 00:26:29.486977 1592 update_check_scheduler.cc:74] Next update check in 10m13s Nov 6 00:26:29.527299 extend-filesystems[1581]: Resized partition /dev/vda9 Nov 6 00:26:29.528506 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:26:29.543634 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:26:29.543684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 6 00:26:29.546681 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:26:29.546712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:26:29.557516 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:26:29.559413 extend-filesystems[1628]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:26:29.579985 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:26:29.580026 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:26:29.580578 systemd-logind[1591]: New seat seat0. Nov 6 00:26:29.583187 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:26:29.622208 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 6 00:26:29.622415 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Nov 6 00:26:29.622409 oslogin_cache_refresh[1582]: Failure getting users, quitting Nov 6 00:26:29.622543 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:26:29.622543 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Nov 6 00:26:29.622435 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:26:29.622501 oslogin_cache_refresh[1582]: Refreshing group entry cache Nov 6 00:26:29.636875 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Nov 6 00:26:29.636875 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Nov 6 00:26:29.636525 oslogin_cache_refresh[1582]: Failure getting groups, quitting Nov 6 00:26:29.636548 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:26:29.640242 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:26:29.640816 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:26:29.768552 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:26:29.779169 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:26:29.815697 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:26:29.823235 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:26:29.846875 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 6 00:26:29.860147 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:26:29.860437 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:26:29.865454 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:26:29.878084 systemd-networkd[1517]: eth0: Gained IPv6LL Nov 6 00:26:29.882549 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:26:29.908940 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:26:29.940864 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 00:26:29.967365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:29.974319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:26:29.997252 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:26:30.004735 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:26:30.013336 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Nov 6 00:26:30.028895 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:26:30.045988 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 00:26:30.046356 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 00:26:30.048802 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:26:30.680534 bash[1645]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:26:30.693480 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:26:30.694319 extend-filesystems[1628]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 00:26:30.694319 extend-filesystems[1628]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 00:26:30.694319 extend-filesystems[1628]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 6 00:26:30.703665 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Nov 6 00:26:30.697113 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:26:30.708051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:26:30.712064 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:26:30.714585 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 6 00:26:30.840680 containerd[1622]: time="2025-11-06T00:26:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:26:30.842852 containerd[1622]: time="2025-11-06T00:26:30.842660749Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:26:30.860860 containerd[1622]: time="2025-11-06T00:26:30.860781381Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.543µs" Nov 6 00:26:30.861030 containerd[1622]: time="2025-11-06T00:26:30.861012665Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:26:30.861088 containerd[1622]: time="2025-11-06T00:26:30.861075894Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:26:30.861394 containerd[1622]: time="2025-11-06T00:26:30.861375946Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:26:30.861458 containerd[1622]: time="2025-11-06T00:26:30.861445877Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:26:30.861525 containerd[1622]: time="2025-11-06T00:26:30.861512713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:26:30.861696 containerd[1622]: time="2025-11-06T00:26:30.861669497Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:26:30.861763 containerd[1622]: time="2025-11-06T00:26:30.861746802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862153 
containerd[1622]: time="2025-11-06T00:26:30.862129429Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862223 containerd[1622]: time="2025-11-06T00:26:30.862210291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862275 containerd[1622]: time="2025-11-06T00:26:30.862262108Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862331 containerd[1622]: time="2025-11-06T00:26:30.862317622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862499 containerd[1622]: time="2025-11-06T00:26:30.862479085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:26:30.862935 containerd[1622]: time="2025-11-06T00:26:30.862914792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:26:30.863044 containerd[1622]: time="2025-11-06T00:26:30.863027373Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:26:30.863093 containerd[1622]: time="2025-11-06T00:26:30.863081134Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:26:30.863221 containerd[1622]: time="2025-11-06T00:26:30.863166524Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:26:30.863665 containerd[1622]: 
time="2025-11-06T00:26:30.863639341Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:26:30.863814 containerd[1622]: time="2025-11-06T00:26:30.863798660Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:26:31.007269 containerd[1622]: time="2025-11-06T00:26:31.006994122Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:26:31.007628 containerd[1622]: time="2025-11-06T00:26:31.007588277Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007639763Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007728530Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007746213Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007763575Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007811074Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007874614Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007908167Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007941910Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.007982245Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:26:31.008866 containerd[1622]: time="2025-11-06T00:26:31.008009777Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:26:31.009290 containerd[1622]: time="2025-11-06T00:26:31.009255463Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:26:31.009290 containerd[1622]: time="2025-11-06T00:26:31.009288776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:26:31.009367 containerd[1622]: time="2025-11-06T00:26:31.009303754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:26:31.009367 containerd[1622]: time="2025-11-06T00:26:31.009314454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:26:31.010429 containerd[1622]: time="2025-11-06T00:26:31.010388047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:26:31.010487 containerd[1622]: time="2025-11-06T00:26:31.010430407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:26:31.010487 containerd[1622]: time="2025-11-06T00:26:31.010444072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:26:31.012089 containerd[1622]: time="2025-11-06T00:26:31.012059221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:26:31.012192 containerd[1622]: time="2025-11-06T00:26:31.012169688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 
Nov 6 00:26:31.012215 containerd[1622]: time="2025-11-06T00:26:31.012188975Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:26:31.012237 containerd[1622]: time="2025-11-06T00:26:31.012222958Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:26:31.012343 containerd[1622]: time="2025-11-06T00:26:31.012316394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:26:31.012343 containerd[1622]: time="2025-11-06T00:26:31.012338595Z" level=info msg="Start snapshots syncer" Nov 6 00:26:31.012592 containerd[1622]: time="2025-11-06T00:26:31.012382357Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:26:31.012889 containerd[1622]: time="2025-11-06T00:26:31.012767139Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:26:31.013076 containerd[1622]: time="2025-11-06T00:26:31.013003692Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:26:31.013176 containerd[1622]: time="2025-11-06T00:26:31.013111986Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:26:31.013382 containerd[1622]: time="2025-11-06T00:26:31.013355122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:26:31.013422 containerd[1622]: time="2025-11-06T00:26:31.013381852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:26:31.013422 containerd[1622]: time="2025-11-06T00:26:31.013392592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:26:31.013422 containerd[1622]: time="2025-11-06T00:26:31.013404174Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:26:31.013422 containerd[1622]: time="2025-11-06T00:26:31.013415355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:26:31.013506 containerd[1622]:
time="2025-11-06T00:26:31.013425183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:26:31.013506 containerd[1622]: time="2025-11-06T00:26:31.013439961Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:26:31.013506 containerd[1622]: time="2025-11-06T00:26:31.013468975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:26:31.013506 containerd[1622]: time="2025-11-06T00:26:31.013483282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:26:31.013506 containerd[1622]: time="2025-11-06T00:26:31.013493461Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:26:31.013602 containerd[1622]: time="2025-11-06T00:26:31.013557982Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:26:31.013602 containerd[1622]: time="2025-11-06T00:26:31.013575946Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:26:31.013674 containerd[1622]: time="2025-11-06T00:26:31.013584792Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:26:31.013696 containerd[1622]: time="2025-11-06T00:26:31.013674060Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:26:31.013696 containerd[1622]: time="2025-11-06T00:26:31.013682766Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:26:31.013696 containerd[1622]: time="2025-11-06T00:26:31.013692264Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:26:31.014624 containerd[1622]: time="2025-11-06T00:26:31.013702473Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:26:31.014659 containerd[1622]: time="2025-11-06T00:26:31.014632707Z" level=info msg="runtime interface created" Nov 6 00:26:31.014659 containerd[1622]: time="2025-11-06T00:26:31.014641304Z" level=info msg="created NRI interface" Nov 6 00:26:31.014659 containerd[1622]: time="2025-11-06T00:26:31.014655741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:26:31.014723 containerd[1622]: time="2025-11-06T00:26:31.014668164Z" level=info msg="Connect containerd service" Nov 6 00:26:31.014723 containerd[1622]: time="2025-11-06T00:26:31.014697910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:26:31.021702 containerd[1622]: time="2025-11-06T00:26:31.021646761Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:26:31.211665 tar[1603]: linux-amd64/README.md Nov 6 00:26:31.330693 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
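[editor's note] The `failed to load cni during init` error just above is the CRI plugin reporting an empty `confDir` (`/etc/cni/net.d`, per the config dump earlier in the log); it is expected on a node whose pod network has not been installed yet. As a hedged sketch only: a minimal bridge conflist such as the one below, placed at a path like `/etc/cni/net.d/10-containerd-net.conflist`, would satisfy the loader. The file name, network name, and subnet here are illustrative assumptions, and the referenced `bridge`, `host-local`, and `portmap` plugin binaries must already exist under the configured `binDir` (`/opt/cni/bin`).

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

In practice a CNI provider (flannel, Calico, etc.) usually writes this file itself once deployed, at which point the "cni network conf syncer" started later in this log picks it up without a containerd restart.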
Nov 6 00:26:31.436872 containerd[1622]: time="2025-11-06T00:26:31.436703531Z" level=info msg="Start subscribing containerd event" Nov 6 00:26:31.436872 containerd[1622]: time="2025-11-06T00:26:31.436812575Z" level=info msg="Start recovering state" Nov 6 00:26:31.437069 containerd[1622]: time="2025-11-06T00:26:31.437034101Z" level=info msg="Start event monitor" Nov 6 00:26:31.437069 containerd[1622]: time="2025-11-06T00:26:31.437062013Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:26:31.437147 containerd[1622]: time="2025-11-06T00:26:31.437075559Z" level=info msg="Start streaming server" Nov 6 00:26:31.437147 containerd[1622]: time="2025-11-06T00:26:31.437088503Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:26:31.437147 containerd[1622]: time="2025-11-06T00:26:31.437098792Z" level=info msg="runtime interface starting up..." Nov 6 00:26:31.437147 containerd[1622]: time="2025-11-06T00:26:31.437106457Z" level=info msg="starting plugins..." Nov 6 00:26:31.437147 containerd[1622]: time="2025-11-06T00:26:31.437127526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:26:31.439980 containerd[1622]: time="2025-11-06T00:26:31.439733223Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:26:31.439980 containerd[1622]: time="2025-11-06T00:26:31.439882974Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:26:31.440198 containerd[1622]: time="2025-11-06T00:26:31.440068231Z" level=info msg="containerd successfully booted in 0.600164s" Nov 6 00:26:31.440464 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:26:32.574293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:32.577335 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:26:32.579440 systemd[1]: Startup finished in 3.137s (kernel) + 10.118s (initrd) + 6.931s (userspace) = 20.187s. 
Nov 6 00:26:32.600480 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:26:33.215852 kubelet[1719]: E1106 00:26:33.215741 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:26:33.220904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:26:33.221187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:26:33.221767 systemd[1]: kubelet.service: Consumed 2.124s CPU time, 264.4M memory peak. Nov 6 00:26:39.173444 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:26:39.175168 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:45766.service - OpenSSH per-connection server daemon (10.0.0.1:45766). Nov 6 00:26:39.283775 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 45766 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:39.286315 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:39.300435 systemd-logind[1591]: New session 1 of user core. Nov 6 00:26:39.301952 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:26:39.303408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:26:39.338198 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:26:39.341068 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 6 00:26:39.356010 (systemd)[1737]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:26:39.359268 systemd-logind[1591]: New session c1 of user core. Nov 6 00:26:39.519502 systemd[1737]: Queued start job for default target default.target. Nov 6 00:26:39.536192 systemd[1737]: Created slice app.slice - User Application Slice. Nov 6 00:26:39.536224 systemd[1737]: Reached target paths.target - Paths. Nov 6 00:26:39.536283 systemd[1737]: Reached target timers.target - Timers. Nov 6 00:26:39.538000 systemd[1737]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:26:39.549654 systemd[1737]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:26:39.549859 systemd[1737]: Reached target sockets.target - Sockets. Nov 6 00:26:39.549919 systemd[1737]: Reached target basic.target - Basic System. Nov 6 00:26:39.549980 systemd[1737]: Reached target default.target - Main User Target. Nov 6 00:26:39.550023 systemd[1737]: Startup finished in 182ms. Nov 6 00:26:39.550330 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:26:39.552176 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:26:39.619459 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:45776.service - OpenSSH per-connection server daemon (10.0.0.1:45776). Nov 6 00:26:39.680665 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 45776 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:39.682272 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:39.686638 systemd-logind[1591]: New session 2 of user core. Nov 6 00:26:39.697993 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 6 00:26:39.751207 sshd[1751]: Connection closed by 10.0.0.1 port 45776 Nov 6 00:26:39.751490 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:39.762447 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:45776.service: Deactivated successfully. Nov 6 00:26:39.764331 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:26:39.765146 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:26:39.767791 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:45782.service - OpenSSH per-connection server daemon (10.0.0.1:45782). Nov 6 00:26:39.768534 systemd-logind[1591]: Removed session 2. Nov 6 00:26:39.821925 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 45782 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:39.823262 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:39.827489 systemd-logind[1591]: New session 3 of user core. Nov 6 00:26:39.840940 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:26:39.890153 sshd[1760]: Connection closed by 10.0.0.1 port 45782 Nov 6 00:26:39.890421 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:39.898514 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:45782.service: Deactivated successfully. Nov 6 00:26:39.900596 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:26:39.901534 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:26:39.904489 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:45794.service - OpenSSH per-connection server daemon (10.0.0.1:45794). Nov 6 00:26:39.905364 systemd-logind[1591]: Removed session 3. 
Nov 6 00:26:39.966667 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 45794 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:39.968811 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:39.974994 systemd-logind[1591]: New session 4 of user core. Nov 6 00:26:39.993238 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:26:40.052489 sshd[1769]: Connection closed by 10.0.0.1 port 45794 Nov 6 00:26:40.053097 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:40.067471 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:45794.service: Deactivated successfully. Nov 6 00:26:40.069853 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:26:40.070637 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:26:40.073971 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:58042.service - OpenSSH per-connection server daemon (10.0.0.1:58042). Nov 6 00:26:40.074798 systemd-logind[1591]: Removed session 4. Nov 6 00:26:40.150155 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 58042 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:40.152320 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.157287 systemd-logind[1591]: New session 5 of user core. Nov 6 00:26:40.171048 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 6 00:26:40.237588 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:26:40.237974 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:26:40.255423 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 6 00:26:40.257493 sshd[1778]: Connection closed by 10.0.0.1 port 58042 Nov 6 00:26:40.257941 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:40.281460 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:58042.service: Deactivated successfully. Nov 6 00:26:40.283613 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:26:40.284599 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:26:40.287473 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:58050.service - OpenSSH per-connection server daemon (10.0.0.1:58050). Nov 6 00:26:40.288105 systemd-logind[1591]: Removed session 5. Nov 6 00:26:40.358014 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:40.359639 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.364498 systemd-logind[1591]: New session 6 of user core. Nov 6 00:26:40.378005 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 00:26:40.434375 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:26:40.434725 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:26:40.448233 sudo[1791]: pam_unix(sudo:session): session closed for user root Nov 6 00:26:40.459501 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:26:40.460000 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:26:40.473816 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:26:40.537348 augenrules[1813]: No rules Nov 6 00:26:40.539076 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:26:40.539374 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:26:40.540519 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 6 00:26:40.542430 sshd[1789]: Connection closed by 10.0.0.1 port 58050 Nov 6 00:26:40.542938 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:40.558582 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:58050.service: Deactivated successfully. Nov 6 00:26:40.560455 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:26:40.561362 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:26:40.564651 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:58052.service - OpenSSH per-connection server daemon (10.0.0.1:58052). Nov 6 00:26:40.565382 systemd-logind[1591]: Removed session 6. Nov 6 00:26:40.621957 sshd[1822]: Accepted publickey for core from 10.0.0.1 port 58052 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:26:40.623524 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.629110 systemd-logind[1591]: New session 7 of user core. 
Nov 6 00:26:40.647964 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:26:40.703482 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:26:40.703931 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:26:41.398482 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:26:41.424280 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:26:42.400849 dockerd[1847]: time="2025-11-06T00:26:42.399596218Z" level=info msg="Starting up" Nov 6 00:26:42.408236 dockerd[1847]: time="2025-11-06T00:26:42.408187480Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:26:42.455177 dockerd[1847]: time="2025-11-06T00:26:42.455056876Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:26:42.599101 dockerd[1847]: time="2025-11-06T00:26:42.598259342Z" level=info msg="Loading containers: start." Nov 6 00:26:42.662074 kernel: Initializing XFRM netlink socket Nov 6 00:26:43.393597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:26:43.395756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:43.466161 systemd-networkd[1517]: docker0: Link UP Nov 6 00:26:43.472437 dockerd[1847]: time="2025-11-06T00:26:43.472366515Z" level=info msg="Loading containers: done." Nov 6 00:26:43.493073 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3121290157-merged.mount: Deactivated successfully. Nov 6 00:26:43.710680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:26:43.717428 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:26:43.774733 kubelet[2040]: E1106 00:26:43.774651 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:26:43.781662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:26:43.781889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:26:43.782276 systemd[1]: kubelet.service: Consumed 345ms CPU time, 110.8M memory peak. Nov 6 00:26:43.815577 dockerd[1847]: time="2025-11-06T00:26:43.815192596Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:26:43.815577 dockerd[1847]: time="2025-11-06T00:26:43.815421416Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:26:43.815997 dockerd[1847]: time="2025-11-06T00:26:43.815950408Z" level=info msg="Initializing buildkit" Nov 6 00:26:43.907189 dockerd[1847]: time="2025-11-06T00:26:43.907111187Z" level=info msg="Completed buildkit initialization" Nov 6 00:26:43.914869 dockerd[1847]: time="2025-11-06T00:26:43.914753288Z" level=info msg="Daemon has completed initialization" Nov 6 00:26:43.915067 dockerd[1847]: time="2025-11-06T00:26:43.914987778Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:26:43.917339 systemd[1]: Started docker.service - Docker Application Container Engine. 
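[editor's note] The recurring `Referenced but unset environment variable evaluates to an empty string` notices from kubelet.service and docker.service above are informational: systemd expands an undefined `$VAR` in `ExecStart=` to the empty string and logs the fact. If the noise is unwanted, a drop-in that defines the variables (even as empty) silences it; the path below is a hypothetical example, not taken from this host.

```ini
# Hypothetical drop-in: /etc/systemd/system/kubelet.service.d/10-env.conf
# Defining the variables, even empty, stops the "referenced but unset" notice.
[Service]
Environment="KUBELET_EXTRA_ARGS=" "KUBELET_KUBEADM_ARGS="
```

A `systemctl daemon-reload` is needed for the drop-in to take effect on the next service start.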
Nov 6 00:26:44.829277 containerd[1622]: time="2025-11-06T00:26:44.829213938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 00:26:45.697567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032968682.mount: Deactivated successfully. Nov 6 00:26:47.349340 containerd[1622]: time="2025-11-06T00:26:47.349243874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:47.350612 containerd[1622]: time="2025-11-06T00:26:47.350532661Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 6 00:26:47.353917 containerd[1622]: time="2025-11-06T00:26:47.353871543Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:47.357493 containerd[1622]: time="2025-11-06T00:26:47.357399108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:47.358728 containerd[1622]: time="2025-11-06T00:26:47.358646547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.529372316s" Nov 6 00:26:47.358728 containerd[1622]: time="2025-11-06T00:26:47.358737077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 6 00:26:47.359798 containerd[1622]: time="2025-11-06T00:26:47.359758893Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 00:26:49.025887 containerd[1622]: time="2025-11-06T00:26:49.025773135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:49.026521 containerd[1622]: time="2025-11-06T00:26:49.026488657Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 6 00:26:49.028048 containerd[1622]: time="2025-11-06T00:26:49.027965005Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:49.030923 containerd[1622]: time="2025-11-06T00:26:49.030871847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:49.032223 containerd[1622]: time="2025-11-06T00:26:49.032170011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.672376393s" Nov 6 00:26:49.032264 containerd[1622]: time="2025-11-06T00:26:49.032220897Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 6 00:26:49.032909 containerd[1622]: time="2025-11-06T00:26:49.032855898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 00:26:51.944429 containerd[1622]: 
time="2025-11-06T00:26:51.944285277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:51.958169 containerd[1622]: time="2025-11-06T00:26:51.958085397Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 6 00:26:51.974990 containerd[1622]: time="2025-11-06T00:26:51.974898147Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:51.979690 containerd[1622]: time="2025-11-06T00:26:51.979610214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:51.981078 containerd[1622]: time="2025-11-06T00:26:51.980989280Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.948072398s" Nov 6 00:26:51.981078 containerd[1622]: time="2025-11-06T00:26:51.981078127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 6 00:26:51.982129 containerd[1622]: time="2025-11-06T00:26:51.982081588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 00:26:54.032609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:26:54.037046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 00:26:54.170093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409031652.mount: Deactivated successfully. Nov 6 00:26:54.346115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:54.384256 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:26:54.555684 kubelet[2162]: E1106 00:26:54.555590 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:26:54.561649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:26:54.562092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:26:54.562879 systemd[1]: kubelet.service: Consumed 352ms CPU time, 111.3M memory peak. 
Nov 6 00:26:56.189907 containerd[1622]: time="2025-11-06T00:26:56.189011374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:56.190578 containerd[1622]: time="2025-11-06T00:26:56.190215802Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 6 00:26:56.193464 containerd[1622]: time="2025-11-06T00:26:56.193391728Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:56.195941 containerd[1622]: time="2025-11-06T00:26:56.195883141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:56.196535 containerd[1622]: time="2025-11-06T00:26:56.196479309Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 4.214343839s" Nov 6 00:26:56.196535 containerd[1622]: time="2025-11-06T00:26:56.196526096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 6 00:26:56.198504 containerd[1622]: time="2025-11-06T00:26:56.198416391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 6 00:26:57.526066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558829179.mount: Deactivated successfully. 
Nov 6 00:26:59.244981 containerd[1622]: time="2025-11-06T00:26:59.244891339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:59.247249 containerd[1622]: time="2025-11-06T00:26:59.247174701Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 6 00:26:59.248618 containerd[1622]: time="2025-11-06T00:26:59.248563796Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:59.252616 containerd[1622]: time="2025-11-06T00:26:59.252515768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:59.253511 containerd[1622]: time="2025-11-06T00:26:59.253451683Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.055001388s" Nov 6 00:26:59.253511 containerd[1622]: time="2025-11-06T00:26:59.253489053Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 6 00:26:59.254371 containerd[1622]: time="2025-11-06T00:26:59.254310844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:26:59.978616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583527244.mount: Deactivated successfully. 
Nov 6 00:26:59.990729 containerd[1622]: time="2025-11-06T00:26:59.990653870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:59.991902 containerd[1622]: time="2025-11-06T00:26:59.991863148Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:26:59.993116 containerd[1622]: time="2025-11-06T00:26:59.993068108Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:59.997218 containerd[1622]: time="2025-11-06T00:26:59.997175961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:59.998195 containerd[1622]: time="2025-11-06T00:26:59.998170366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 743.817003ms" Nov 6 00:26:59.998276 containerd[1622]: time="2025-11-06T00:26:59.998201254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:26:59.998712 containerd[1622]: time="2025-11-06T00:26:59.998674913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 6 00:27:01.992412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800607083.mount: Deactivated 
successfully. Nov 6 00:27:04.563047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:27:04.564953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:27:05.242244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:05.266291 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:27:05.561444 kubelet[2292]: E1106 00:27:05.561252 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:27:05.566083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:27:05.566378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:27:05.566913 systemd[1]: kubelet.service: Consumed 271ms CPU time, 112.2M memory peak. 
Nov 6 00:27:05.819126 containerd[1622]: time="2025-11-06T00:27:05.819046714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:27:05.820855 containerd[1622]: time="2025-11-06T00:27:05.820794173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 6 00:27:05.822324 containerd[1622]: time="2025-11-06T00:27:05.822269721Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:27:05.825714 containerd[1622]: time="2025-11-06T00:27:05.825657143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:27:05.826739 containerd[1622]: time="2025-11-06T00:27:05.826701465Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.827996214s" Nov 6 00:27:05.826921 containerd[1622]: time="2025-11-06T00:27:05.826740038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 6 00:27:08.916369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:08.916597 systemd[1]: kubelet.service: Consumed 271ms CPU time, 112.2M memory peak. Nov 6 00:27:08.919221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:27:08.950325 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-7.scope)... 
Nov 6 00:27:08.950347 systemd[1]: Reloading... Nov 6 00:27:09.052858 zram_generator::config[2375]: No configuration found. Nov 6 00:27:09.564506 systemd[1]: Reloading finished in 613 ms. Nov 6 00:27:09.649223 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:27:09.649366 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:27:09.649790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:09.649889 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.4M memory peak. Nov 6 00:27:09.652349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:27:10.009196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:10.038522 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:27:10.088118 kubelet[2422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:27:10.088118 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:27:10.088118 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:27:10.088757 kubelet[2422]: I1106 00:27:10.088126 2422 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:27:10.524789 kubelet[2422]: I1106 00:27:10.524717 2422 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 00:27:10.524789 kubelet[2422]: I1106 00:27:10.524755 2422 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:27:10.525141 kubelet[2422]: I1106 00:27:10.525100 2422 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 00:27:10.554603 kubelet[2422]: E1106 00:27:10.554547 2422 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:10.555763 kubelet[2422]: I1106 00:27:10.555735 2422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:27:10.563507 kubelet[2422]: I1106 00:27:10.563471 2422 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:27:10.569567 kubelet[2422]: I1106 00:27:10.569527 2422 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:27:10.572734 kubelet[2422]: I1106 00:27:10.572659 2422 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:27:10.572968 kubelet[2422]: I1106 00:27:10.572715 2422 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:27:10.573088 kubelet[2422]: I1106 00:27:10.572972 2422 topology_manager.go:138] "Creating topology manager with none policy" Nov 
6 00:27:10.573088 kubelet[2422]: I1106 00:27:10.572985 2422 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 00:27:10.573196 kubelet[2422]: I1106 00:27:10.573172 2422 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:27:10.576214 kubelet[2422]: I1106 00:27:10.576182 2422 kubelet.go:446] "Attempting to sync node with API server" Nov 6 00:27:10.576214 kubelet[2422]: I1106 00:27:10.576211 2422 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:27:10.576293 kubelet[2422]: I1106 00:27:10.576249 2422 kubelet.go:352] "Adding apiserver pod source" Nov 6 00:27:10.576293 kubelet[2422]: I1106 00:27:10.576261 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:27:10.578795 kubelet[2422]: W1106 00:27:10.578679 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:10.578795 kubelet[2422]: E1106 00:27:10.578750 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:10.579459 kubelet[2422]: I1106 00:27:10.579420 2422 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:27:10.579615 kubelet[2422]: W1106 00:27:10.579580 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:10.581326 kubelet[2422]: 
E1106 00:27:10.579887 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:10.581326 kubelet[2422]: I1106 00:27:10.580460 2422 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 00:27:10.581326 kubelet[2422]: W1106 00:27:10.580522 2422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:27:10.582884 kubelet[2422]: I1106 00:27:10.582857 2422 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:27:10.582976 kubelet[2422]: I1106 00:27:10.582955 2422 server.go:1287] "Started kubelet" Nov 6 00:27:10.584342 kubelet[2422]: I1106 00:27:10.584283 2422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:27:10.584679 kubelet[2422]: I1106 00:27:10.584659 2422 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:27:10.584726 kubelet[2422]: I1106 00:27:10.584311 2422 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:27:10.586761 kubelet[2422]: I1106 00:27:10.586718 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:27:10.587845 kubelet[2422]: I1106 00:27:10.586776 2422 server.go:479] "Adding debug handlers to kubelet server" Nov 6 00:27:10.587988 kubelet[2422]: I1106 00:27:10.587930 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:27:10.590968 kubelet[2422]: E1106 00:27:10.590719 2422 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:27:10.590968 kubelet[2422]: E1106 00:27:10.590797 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:10.590968 kubelet[2422]: I1106 00:27:10.590818 2422 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:27:10.591096 kubelet[2422]: I1106 00:27:10.591002 2422 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:27:10.591096 kubelet[2422]: I1106 00:27:10.591048 2422 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:27:10.591544 kubelet[2422]: W1106 00:27:10.591501 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:10.591623 kubelet[2422]: E1106 00:27:10.591556 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:10.591804 kubelet[2422]: E1106 00:27:10.591772 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Nov 6 00:27:10.591921 kubelet[2422]: I1106 00:27:10.591789 2422 factory.go:221] Registration of the systemd container factory successfully Nov 6 00:27:10.592047 kubelet[2422]: I1106 00:27:10.592011 2422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:27:10.593067 kubelet[2422]: I1106 00:27:10.593049 2422 factory.go:221] Registration of the containerd container factory successfully Nov 6 00:27:10.595954 kubelet[2422]: E1106 00:27:10.593515 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875434dcb0235ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:27:10.582879661 +0000 UTC m=+0.539468796,LastTimestamp:2025-11-06 00:27:10.582879661 +0000 UTC m=+0.539468796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:27:10.611585 kubelet[2422]: I1106 00:27:10.611489 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 00:27:10.612347 kubelet[2422]: I1106 00:27:10.612312 2422 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:27:10.612347 kubelet[2422]: I1106 00:27:10.612335 2422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:27:10.612427 kubelet[2422]: I1106 00:27:10.612357 2422 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:27:10.613569 kubelet[2422]: I1106 00:27:10.613539 2422 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:27:10.613646 kubelet[2422]: I1106 00:27:10.613582 2422 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 00:27:10.613646 kubelet[2422]: I1106 00:27:10.613617 2422 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:27:10.613646 kubelet[2422]: I1106 00:27:10.613629 2422 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 00:27:10.613918 kubelet[2422]: E1106 00:27:10.613698 2422 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:27:10.615210 kubelet[2422]: W1106 00:27:10.615149 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:10.615360 kubelet[2422]: E1106 00:27:10.615217 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:10.691867 kubelet[2422]: E1106 00:27:10.691772 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:10.714119 kubelet[2422]: E1106 00:27:10.714044 2422 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:27:10.792743 kubelet[2422]: E1106 00:27:10.792561 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:10.793135 kubelet[2422]: E1106 00:27:10.793084 2422 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Nov 6 00:27:10.893550 kubelet[2422]: E1106 00:27:10.893460 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:10.914769 kubelet[2422]: E1106 00:27:10.914672 2422 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:27:10.994325 kubelet[2422]: E1106 00:27:10.994226 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.095116 kubelet[2422]: E1106 00:27:11.095011 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.194141 kubelet[2422]: E1106 00:27:11.194081 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Nov 6 00:27:11.196177 kubelet[2422]: E1106 00:27:11.196132 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.296886 kubelet[2422]: E1106 00:27:11.296798 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.315057 kubelet[2422]: E1106 00:27:11.315009 2422 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:27:11.398015 kubelet[2422]: E1106 00:27:11.397808 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.498602 kubelet[2422]: E1106 00:27:11.498509 2422 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.599404 kubelet[2422]: E1106 00:27:11.599286 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.700316 kubelet[2422]: E1106 00:27:11.700050 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.800986 kubelet[2422]: E1106 00:27:11.800890 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.901805 kubelet[2422]: E1106 00:27:11.901721 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:11.920634 kubelet[2422]: W1106 00:27:11.920562 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:11.920817 kubelet[2422]: E1106 00:27:11.920651 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:11.995026 kubelet[2422]: E1106 00:27:11.994850 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Nov 6 00:27:12.002224 kubelet[2422]: E1106 00:27:12.002156 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.015030 
kubelet[2422]: W1106 00:27:12.014979 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:12.015030 kubelet[2422]: E1106 00:27:12.015034 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:12.077173 kubelet[2422]: W1106 00:27:12.077090 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:12.077173 kubelet[2422]: E1106 00:27:12.077162 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:12.103313 kubelet[2422]: E1106 00:27:12.103205 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.115483 kubelet[2422]: E1106 00:27:12.115421 2422 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:27:12.154381 kubelet[2422]: W1106 00:27:12.154282 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.88:6443: connect: connection refused Nov 6 00:27:12.154381 kubelet[2422]: E1106 00:27:12.154370 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:12.204324 kubelet[2422]: E1106 00:27:12.204228 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.305190 kubelet[2422]: E1106 00:27:12.304998 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.375971 kubelet[2422]: E1106 00:27:12.375773 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875434dcb0235ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:27:10.582879661 +0000 UTC m=+0.539468796,LastTimestamp:2025-11-06 00:27:10.582879661 +0000 UTC m=+0.539468796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:27:12.405221 kubelet[2422]: E1106 00:27:12.405141 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.506000 kubelet[2422]: E1106 00:27:12.505933 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Nov 6 00:27:12.563673 kubelet[2422]: E1106 00:27:12.563636 2422 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:12.606339 kubelet[2422]: E1106 00:27:12.606282 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.707000 kubelet[2422]: E1106 00:27:12.706931 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.807762 kubelet[2422]: E1106 00:27:12.807686 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:12.908519 kubelet[2422]: E1106 00:27:12.908366 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.009144 kubelet[2422]: E1106 00:27:13.009070 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.109412 kubelet[2422]: E1106 00:27:13.109331 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.210206 kubelet[2422]: E1106 00:27:13.210013 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.310849 kubelet[2422]: E1106 00:27:13.310775 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.411517 kubelet[2422]: E1106 00:27:13.411429 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 
00:27:13.479094 kubelet[2422]: I1106 00:27:13.478945 2422 policy_none.go:49] "None policy: Start" Nov 6 00:27:13.479094 kubelet[2422]: I1106 00:27:13.479002 2422 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:27:13.479094 kubelet[2422]: I1106 00:27:13.479024 2422 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:27:13.512138 kubelet[2422]: E1106 00:27:13.512089 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.543949 kubelet[2422]: W1106 00:27:13.543903 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:13.543949 kubelet[2422]: E1106 00:27:13.543950 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:13.596353 kubelet[2422]: E1106 00:27:13.596276 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="3.2s" Nov 6 00:27:13.612599 kubelet[2422]: E1106 00:27:13.612508 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:13.623926 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:27:13.648665 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 6 00:27:13.652697 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:27:13.662907 kubelet[2422]: I1106 00:27:13.662853 2422 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 00:27:13.663177 kubelet[2422]: I1106 00:27:13.663132 2422 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:27:13.663177 kubelet[2422]: I1106 00:27:13.663153 2422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:27:13.663558 kubelet[2422]: I1106 00:27:13.663505 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:27:13.664442 kubelet[2422]: E1106 00:27:13.664407 2422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:27:13.664577 kubelet[2422]: E1106 00:27:13.664502 2422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:27:13.725871 systemd[1]: Created slice kubepods-burstable-podd074ae6cec90207ef29d56ed5b91e8fa.slice - libcontainer container kubepods-burstable-podd074ae6cec90207ef29d56ed5b91e8fa.slice. Nov 6 00:27:13.747691 kubelet[2422]: E1106 00:27:13.747535 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:13.751305 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Nov 6 00:27:13.760872 kubelet[2422]: E1106 00:27:13.760838 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:13.764443 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 6 00:27:13.765319 kubelet[2422]: I1106 00:27:13.765271 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:13.765708 kubelet[2422]: E1106 00:27:13.765670 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Nov 6 00:27:13.766962 kubelet[2422]: E1106 00:27:13.766930 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:13.812623 kubelet[2422]: I1106 00:27:13.812544 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:13.812623 kubelet[2422]: I1106 00:27:13.812595 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:13.812623 kubelet[2422]: I1106 00:27:13.812618 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:13.812623 kubelet[2422]: I1106 00:27:13.812646 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:13.812966 kubelet[2422]: I1106 00:27:13.812665 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:13.812966 kubelet[2422]: I1106 00:27:13.812680 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:13.812966 kubelet[2422]: I1106 00:27:13.812694 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:27:13.812966 kubelet[2422]: I1106 00:27:13.812767 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:13.812966 kubelet[2422]: I1106 00:27:13.812797 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:13.967927 kubelet[2422]: I1106 00:27:13.967870 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:13.968394 kubelet[2422]: E1106 00:27:13.968317 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Nov 6 00:27:14.048762 kubelet[2422]: E1106 00:27:14.048582 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.049641 containerd[1622]: time="2025-11-06T00:27:14.049543592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d074ae6cec90207ef29d56ed5b91e8fa,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:14.061974 kubelet[2422]: E1106 00:27:14.061666 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.062492 containerd[1622]: time="2025-11-06T00:27:14.062385602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:14.067743 
kubelet[2422]: E1106 00:27:14.067707 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.068285 containerd[1622]: time="2025-11-06T00:27:14.068224587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:14.302443 kubelet[2422]: W1106 00:27:14.302145 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:14.302443 kubelet[2422]: E1106 00:27:14.302326 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:14.353367 containerd[1622]: time="2025-11-06T00:27:14.353142772Z" level=info msg="connecting to shim 6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f" address="unix:///run/containerd/s/c7c3987dcc8ae32c847ce406e06c228e6efb6398c9a1a915482d739ab4341ac0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:14.354206 containerd[1622]: time="2025-11-06T00:27:14.354165925Z" level=info msg="connecting to shim a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee" address="unix:///run/containerd/s/4ba9f14c8813e6ac820a0553cdf0ec6726a69dd0295683db27f48c833f2d813b" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:14.456033 kubelet[2422]: I1106 00:27:14.455518 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:14.456033 kubelet[2422]: E1106 00:27:14.455949 2422 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Nov 6 00:27:14.482886 containerd[1622]: time="2025-11-06T00:27:14.482788044Z" level=info msg="connecting to shim 2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9" address="unix:///run/containerd/s/93e96e489ee4e6fab49a24ee73ce41612f62ab9edde9d8f89c49a5aa13f4c5cd" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:14.497022 update_engine[1592]: I20251106 00:27:14.496902 1592 update_attempter.cc:509] Updating boot flags... Nov 6 00:27:14.539073 systemd[1]: Started cri-containerd-6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f.scope - libcontainer container 6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f. Nov 6 00:27:14.585183 systemd[1]: Started cri-containerd-a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee.scope - libcontainer container a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee. Nov 6 00:27:14.604738 kubelet[2422]: W1106 00:27:14.604644 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:14.604738 kubelet[2422]: E1106 00:27:14.604740 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:14.619633 systemd[1]: Started cri-containerd-2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9.scope - libcontainer container 2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9. 
Nov 6 00:27:14.774483 containerd[1622]: time="2025-11-06T00:27:14.774422710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d074ae6cec90207ef29d56ed5b91e8fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee\"" Nov 6 00:27:14.776143 kubelet[2422]: E1106 00:27:14.776089 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.778651 containerd[1622]: time="2025-11-06T00:27:14.778573149Z" level=info msg="CreateContainer within sandbox \"a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:27:14.781486 containerd[1622]: time="2025-11-06T00:27:14.781436756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f\"" Nov 6 00:27:14.782589 kubelet[2422]: E1106 00:27:14.782547 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.785368 containerd[1622]: time="2025-11-06T00:27:14.785310109Z" level=info msg="CreateContainer within sandbox \"6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:27:15.130314 kubelet[2422]: W1106 00:27:15.130199 2422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Nov 6 00:27:15.130314 
kubelet[2422]: E1106 00:27:15.130307 2422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:27:15.257721 kubelet[2422]: I1106 00:27:15.257670 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:15.258171 kubelet[2422]: E1106 00:27:15.258122 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Nov 6 00:27:15.563317 containerd[1622]: time="2025-11-06T00:27:15.563270063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9\"" Nov 6 00:27:15.564787 kubelet[2422]: E1106 00:27:15.564756 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:15.567054 containerd[1622]: time="2025-11-06T00:27:15.567017932Z" level=info msg="CreateContainer within sandbox \"2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:27:15.577114 containerd[1622]: time="2025-11-06T00:27:15.576397876Z" level=info msg="Container 8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:15.606073 containerd[1622]: time="2025-11-06T00:27:15.606004610Z" level=info msg="Container f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3: CDI devices from CRI 
Config.CDIDevices: []" Nov 6 00:27:15.612391 containerd[1622]: time="2025-11-06T00:27:15.612280986Z" level=info msg="CreateContainer within sandbox \"6f1419b1ec2c13a3ab7b6a86eab0ac22a1e1d72d9a73c37a53207539fcf82c5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841\"" Nov 6 00:27:15.613107 containerd[1622]: time="2025-11-06T00:27:15.613058371Z" level=info msg="Container 4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:15.614025 containerd[1622]: time="2025-11-06T00:27:15.613961884Z" level=info msg="StartContainer for \"8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841\"" Nov 6 00:27:15.615317 containerd[1622]: time="2025-11-06T00:27:15.615267101Z" level=info msg="connecting to shim 8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841" address="unix:///run/containerd/s/c7c3987dcc8ae32c847ce406e06c228e6efb6398c9a1a915482d739ab4341ac0" protocol=ttrpc version=3 Nov 6 00:27:15.635343 containerd[1622]: time="2025-11-06T00:27:15.632884830Z" level=info msg="CreateContainer within sandbox \"a6c9035de82f2aa001c6d11298bbcc17fa87c6314eb496e6b60188082db750ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3\"" Nov 6 00:27:15.635343 containerd[1622]: time="2025-11-06T00:27:15.633791080Z" level=info msg="StartContainer for \"f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3\"" Nov 6 00:27:15.636813 containerd[1622]: time="2025-11-06T00:27:15.636779299Z" level=info msg="connecting to shim f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3" address="unix:///run/containerd/s/4ba9f14c8813e6ac820a0553cdf0ec6726a69dd0295683db27f48c833f2d813b" protocol=ttrpc version=3 Nov 6 00:27:15.644139 containerd[1622]: time="2025-11-06T00:27:15.644030504Z" level=info 
msg="CreateContainer within sandbox \"2c08325445e645f3425514cce9933538c41b5af5d199b2b71213e5eeffeadef9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914\"" Nov 6 00:27:15.646484 containerd[1622]: time="2025-11-06T00:27:15.644625192Z" level=info msg="StartContainer for \"4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914\"" Nov 6 00:27:15.646484 containerd[1622]: time="2025-11-06T00:27:15.645916933Z" level=info msg="connecting to shim 4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914" address="unix:///run/containerd/s/93e96e489ee4e6fab49a24ee73ce41612f62ab9edde9d8f89c49a5aa13f4c5cd" protocol=ttrpc version=3 Nov 6 00:27:15.647235 systemd[1]: Started cri-containerd-8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841.scope - libcontainer container 8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841. Nov 6 00:27:15.692318 systemd[1]: Started cri-containerd-4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914.scope - libcontainer container 4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914. Nov 6 00:27:15.707130 systemd[1]: Started cri-containerd-f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3.scope - libcontainer container f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3. 
Nov 6 00:27:15.843156 containerd[1622]: time="2025-11-06T00:27:15.840973682Z" level=info msg="StartContainer for \"8a5ae0cacbe158a1330f7ab1db6fa5cbb2b06710f8dac35d9f26619f194b6841\" returns successfully" Nov 6 00:27:15.845896 containerd[1622]: time="2025-11-06T00:27:15.845813082Z" level=info msg="StartContainer for \"f035291f299ccdeb677408257fcf50363c1f79fb6ac538bb8439e7af28d27fc3\" returns successfully" Nov 6 00:27:15.856989 containerd[1622]: time="2025-11-06T00:27:15.856934429Z" level=info msg="StartContainer for \"4236fc40e640ef8597c455aef4e088d0c483ca1e763551317431629c31fd3914\" returns successfully" Nov 6 00:27:16.637034 kubelet[2422]: E1106 00:27:16.636980 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:16.639152 kubelet[2422]: E1106 00:27:16.637149 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:16.639152 kubelet[2422]: E1106 00:27:16.637721 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:16.639152 kubelet[2422]: E1106 00:27:16.637847 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:16.640334 kubelet[2422]: E1106 00:27:16.640289 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:16.640551 kubelet[2422]: E1106 00:27:16.640529 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:16.860650 kubelet[2422]: 
I1106 00:27:16.860492 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:17.641920 kubelet[2422]: E1106 00:27:17.641887 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:17.642429 kubelet[2422]: E1106 00:27:17.642132 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:17.642429 kubelet[2422]: E1106 00:27:17.642349 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:17.643457 kubelet[2422]: E1106 00:27:17.642883 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:17.643457 kubelet[2422]: E1106 00:27:17.643111 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:17.643457 kubelet[2422]: E1106 00:27:17.643325 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:17.687571 kubelet[2422]: E1106 00:27:17.687515 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 00:27:17.707786 kubelet[2422]: I1106 00:27:17.707721 2422 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:27:17.707786 kubelet[2422]: E1106 00:27:17.707763 2422 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 6 
00:27:17.902286 kubelet[2422]: E1106 00:27:17.902109 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.003046 kubelet[2422]: E1106 00:27:18.002984 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.104519 kubelet[2422]: E1106 00:27:18.104445 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.205691 kubelet[2422]: E1106 00:27:18.205513 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.306174 kubelet[2422]: E1106 00:27:18.306084 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.406979 kubelet[2422]: E1106 00:27:18.406893 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.507958 kubelet[2422]: E1106 00:27:18.507776 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.608390 kubelet[2422]: E1106 00:27:18.608290 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.644321 kubelet[2422]: E1106 00:27:18.644261 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:27:18.644946 kubelet[2422]: E1106 00:27:18.644406 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:18.644946 kubelet[2422]: E1106 00:27:18.644622 2422 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Nov 6 00:27:18.644946 kubelet[2422]: E1106 00:27:18.644720 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:18.709175 kubelet[2422]: E1106 00:27:18.709120 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.810056 kubelet[2422]: E1106 00:27:18.809898 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:18.910746 kubelet[2422]: E1106 00:27:18.910689 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.011484 kubelet[2422]: E1106 00:27:19.011420 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.112564 kubelet[2422]: E1106 00:27:19.112476 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.213230 kubelet[2422]: E1106 00:27:19.213152 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.314203 kubelet[2422]: E1106 00:27:19.314124 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.415128 kubelet[2422]: E1106 00:27:19.414941 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.516195 kubelet[2422]: E1106 00:27:19.516113 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.616953 kubelet[2422]: E1106 00:27:19.616874 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Nov 6 00:27:19.717979 kubelet[2422]: E1106 00:27:19.717676 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.818409 kubelet[2422]: E1106 00:27:19.818346 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:19.919375 kubelet[2422]: E1106 00:27:19.919282 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:20.020352 kubelet[2422]: E1106 00:27:20.020174 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:20.121247 kubelet[2422]: E1106 00:27:20.121185 2422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:20.192112 kubelet[2422]: I1106 00:27:20.192021 2422 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:20.277285 kubelet[2422]: I1106 00:27:20.277090 2422 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:20.340653 kubelet[2422]: I1106 00:27:20.340590 2422 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:27:20.595130 kubelet[2422]: I1106 00:27:20.595064 2422 apiserver.go:52] "Watching apiserver" Nov 6 00:27:20.603391 kubelet[2422]: E1106 00:27:20.603351 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:20.603540 kubelet[2422]: E1106 00:27:20.603421 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:20.603540 kubelet[2422]: E1106 00:27:20.603423 2422 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:20.692285 kubelet[2422]: I1106 00:27:20.692224 2422 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:27:20.926766 kubelet[2422]: I1106 00:27:20.926140 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.926114299 podStartE2EDuration="926.114299ms" podCreationTimestamp="2025-11-06 00:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:20.924809983 +0000 UTC m=+10.881399148" watchObservedRunningTime="2025-11-06 00:27:20.926114299 +0000 UTC m=+10.882703464" Nov 6 00:27:20.926766 kubelet[2422]: I1106 00:27:20.926337 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.926328153 podStartE2EDuration="926.328153ms" podCreationTimestamp="2025-11-06 00:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:20.884534732 +0000 UTC m=+10.841123877" watchObservedRunningTime="2025-11-06 00:27:20.926328153 +0000 UTC m=+10.882917288" Nov 6 00:27:20.943045 kubelet[2422]: I1106 00:27:20.942911 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.942890924 podStartE2EDuration="942.890924ms" podCreationTimestamp="2025-11-06 00:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:20.942679755 +0000 UTC m=+10.899268900" watchObservedRunningTime="2025-11-06 00:27:20.942890924 +0000 
UTC m=+10.899480079" Nov 6 00:27:21.701585 kubelet[2422]: E1106 00:27:21.701510 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:22.589989 kubelet[2422]: E1106 00:27:22.589949 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:26.528049 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-7.scope)... Nov 6 00:27:26.528146 systemd[1]: Reloading... Nov 6 00:27:26.660884 zram_generator::config[2760]: No configuration found. Nov 6 00:27:27.037586 systemd[1]: Reloading finished in 508 ms. Nov 6 00:27:27.077287 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:27:27.109961 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:27:27.111480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:27.111556 systemd[1]: kubelet.service: Consumed 1.433s CPU time, 133.5M memory peak. Nov 6 00:27:27.117395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:27:27.519041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:27:27.528369 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:27:27.593160 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:27:27.593160 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 6 00:27:27.593160 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:27:27.593160 kubelet[2801]: I1106 00:27:27.591712 2801 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:27:27.614635 kubelet[2801]: I1106 00:27:27.612713 2801 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 00:27:27.614635 kubelet[2801]: I1106 00:27:27.614030 2801 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:27:27.616257 kubelet[2801]: I1106 00:27:27.616221 2801 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 00:27:27.619107 kubelet[2801]: I1106 00:27:27.618903 2801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 6 00:27:27.623422 kubelet[2801]: I1106 00:27:27.623323 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:27:27.635015 kubelet[2801]: I1106 00:27:27.634947 2801 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:27:27.644458 kubelet[2801]: I1106 00:27:27.644371 2801 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:27:27.644879 kubelet[2801]: I1106 00:27:27.644801 2801 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:27:27.645187 kubelet[2801]: I1106 00:27:27.644869 2801 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:27:27.645280 kubelet[2801]: I1106 00:27:27.645191 2801 topology_manager.go:138] "Creating topology manager with none policy" Nov 
6 00:27:27.645280 kubelet[2801]: I1106 00:27:27.645209 2801 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 00:27:27.645280 kubelet[2801]: I1106 00:27:27.645273 2801 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:27:27.646088 kubelet[2801]: I1106 00:27:27.645714 2801 kubelet.go:446] "Attempting to sync node with API server" Nov 6 00:27:27.646088 kubelet[2801]: I1106 00:27:27.645895 2801 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:27:27.646353 kubelet[2801]: I1106 00:27:27.646050 2801 kubelet.go:352] "Adding apiserver pod source" Nov 6 00:27:27.647382 kubelet[2801]: I1106 00:27:27.647322 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:27:27.648857 kubelet[2801]: I1106 00:27:27.648749 2801 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:27:27.649195 kubelet[2801]: I1106 00:27:27.649169 2801 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 00:27:27.649718 kubelet[2801]: I1106 00:27:27.649686 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:27:27.649762 kubelet[2801]: I1106 00:27:27.649729 2801 server.go:1287] "Started kubelet" Nov 6 00:27:27.658135 kubelet[2801]: I1106 00:27:27.658077 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:27:27.662508 kubelet[2801]: I1106 00:27:27.661947 2801 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:27:27.662508 kubelet[2801]: E1106 00:27:27.662331 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:27:27.662729 kubelet[2801]: I1106 00:27:27.662561 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:27:27.662729 kubelet[2801]: I1106 00:27:27.662697 2801 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:27:27.663119 kubelet[2801]: I1106 00:27:27.662881 2801 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:27:27.663726 kubelet[2801]: I1106 00:27:27.663643 2801 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:27:27.666139 kubelet[2801]: I1106 00:27:27.665064 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:27:27.667288 kubelet[2801]: I1106 00:27:27.667190 2801 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:27:27.668257 kubelet[2801]: I1106 00:27:27.668160 2801 factory.go:221] Registration of the systemd container factory successfully Nov 6 00:27:27.668394 kubelet[2801]: I1106 00:27:27.668328 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:27:27.670656 kubelet[2801]: I1106 00:27:27.670602 2801 server.go:479] "Adding debug handlers to kubelet server" Nov 6 00:27:27.670720 kubelet[2801]: I1106 00:27:27.670706 2801 factory.go:221] Registration of the containerd container factory successfully Nov 6 00:27:27.679399 kubelet[2801]: I1106 00:27:27.679305 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 00:27:27.681602 kubelet[2801]: I1106 00:27:27.681552 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 6 00:27:27.681602 kubelet[2801]: I1106 00:27:27.681591 2801 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 00:27:27.681744 kubelet[2801]: I1106 00:27:27.681667 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:27:27.681744 kubelet[2801]: I1106 00:27:27.681681 2801 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 00:27:27.681789 kubelet[2801]: E1106 00:27:27.681754 2801 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:27:27.741188 kubelet[2801]: I1106 00:27:27.741128 2801 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:27:27.741188 kubelet[2801]: I1106 00:27:27.741154 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:27:27.741188 kubelet[2801]: I1106 00:27:27.741181 2801 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:27:27.741434 kubelet[2801]: I1106 00:27:27.741414 2801 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:27:27.741477 kubelet[2801]: I1106 00:27:27.741431 2801 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:27:27.741477 kubelet[2801]: I1106 00:27:27.741453 2801 policy_none.go:49] "None policy: Start" Nov 6 00:27:27.741477 kubelet[2801]: I1106 00:27:27.741468 2801 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:27:27.741536 kubelet[2801]: I1106 00:27:27.741482 2801 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:27:27.741614 kubelet[2801]: I1106 00:27:27.741599 2801 state_mem.go:75] "Updated machine memory state" Nov 6 00:27:27.747945 kubelet[2801]: I1106 00:27:27.747774 2801 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 00:27:27.748161 kubelet[2801]: I1106 00:27:27.748105 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:27:27.748161 kubelet[2801]: I1106 00:27:27.748119 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:27:27.748561 kubelet[2801]: I1106 00:27:27.748409 2801 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 6 00:27:27.750557 kubelet[2801]: E1106 00:27:27.750519 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:27:27.783257 kubelet[2801]: I1106 00:27:27.783093 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:27:27.783389 kubelet[2801]: I1106 00:27:27.783353 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.783389 kubelet[2801]: I1106 00:27:27.783357 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:27.804814 kubelet[2801]: E1106 00:27:27.804716 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.806338 kubelet[2801]: E1106 00:27:27.806277 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:27.808506 kubelet[2801]: E1106 00:27:27.808444 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 00:27:27.860321 sudo[2835]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 00:27:27.860794 sudo[2835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 00:27:27.861956 kubelet[2801]: I1106 00:27:27.861795 2801 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:27:27.877312 kubelet[2801]: I1106 00:27:27.877101 2801 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 00:27:27.877312 kubelet[2801]: I1106 00:27:27.877202 2801 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:27:27.966818 kubelet[2801]: I1106 00:27:27.966429 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:27.966818 kubelet[2801]: I1106 00:27:27.966550 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.966818 kubelet[2801]: I1106 00:27:27.966583 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.966818 kubelet[2801]: I1106 00:27:27.966606 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:27.966818 kubelet[2801]: I1106 00:27:27.966629 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.967242 kubelet[2801]: I1106 00:27:27.966652 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.967242 kubelet[2801]: I1106 00:27:27.966679 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:27:27.967242 kubelet[2801]: I1106 00:27:27.966700 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:27:27.967242 kubelet[2801]: I1106 00:27:27.966720 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d074ae6cec90207ef29d56ed5b91e8fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d074ae6cec90207ef29d56ed5b91e8fa\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:27:28.105899 kubelet[2801]: E1106 00:27:28.105806 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.108963 kubelet[2801]: E1106 00:27:28.106969 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.110535 kubelet[2801]: E1106 00:27:28.110334 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.405882 sudo[2835]: pam_unix(sudo:session): session closed for user root Nov 6 00:27:28.649458 kubelet[2801]: I1106 00:27:28.649346 2801 apiserver.go:52] "Watching apiserver" Nov 6 00:27:28.662844 kubelet[2801]: I1106 00:27:28.662682 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:27:28.710172 kubelet[2801]: E1106 00:27:28.708603 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.710172 kubelet[2801]: E1106 00:27:28.709553 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.710172 kubelet[2801]: E1106 00:27:28.710089 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:29.714081 kubelet[2801]: E1106 00:27:29.713319 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:29.715726 kubelet[2801]: E1106 00:27:29.715140 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:29.715726 kubelet[2801]: E1106 00:27:29.715353 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:30.715066 kubelet[2801]: E1106 00:27:30.714643 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:31.155509 sudo[1826]: pam_unix(sudo:session): session closed for user root Nov 6 00:27:31.161413 sshd[1825]: Connection closed by 10.0.0.1 port 58052 Nov 6 00:27:31.175505 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:31.183329 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:58052.service: Deactivated successfully. Nov 6 00:27:31.190234 kubelet[2801]: I1106 00:27:31.190100 2801 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:27:31.190925 containerd[1622]: time="2025-11-06T00:27:31.190887099Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:27:31.191505 kubelet[2801]: I1106 00:27:31.191487 2801 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:27:31.193543 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:27:31.193864 systemd[1]: session-7.scope: Consumed 5.687s CPU time, 256.7M memory peak. Nov 6 00:27:31.201307 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:27:31.204541 systemd-logind[1591]: Removed session 7. Nov 6 00:27:33.167373 systemd[1]: Created slice kubepods-burstable-pod101cc7b4_5405_41fa_89c4_1c62c65bedce.slice - libcontainer container kubepods-burstable-pod101cc7b4_5405_41fa_89c4_1c62c65bedce.slice. 
Nov 6 00:27:33.209590 systemd[1]: Created slice kubepods-besteffort-pod54b25d12_0c25_46fc_97ab_088ae3f8a65c.slice - libcontainer container kubepods-besteffort-pod54b25d12_0c25_46fc_97ab_088ae3f8a65c.slice. Nov 6 00:27:33.234381 kubelet[2801]: I1106 00:27:33.234316 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-etc-cni-netd\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.234381 kubelet[2801]: I1106 00:27:33.234370 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-lib-modules\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.234381 kubelet[2801]: I1106 00:27:33.234393 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-xtables-lock\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234413 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-config-path\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234435 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-net\") pod \"cilium-l9ckn\" 
(UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234461 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54b25d12-0c25-46fc-97ab-088ae3f8a65c-kube-proxy\") pod \"kube-proxy-qd4x7\" (UID: \"54b25d12-0c25-46fc-97ab-088ae3f8a65c\") " pod="kube-system/kube-proxy-qd4x7" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234479 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b25d12-0c25-46fc-97ab-088ae3f8a65c-xtables-lock\") pod \"kube-proxy-qd4x7\" (UID: \"54b25d12-0c25-46fc-97ab-088ae3f8a65c\") " pod="kube-system/kube-proxy-qd4x7" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234502 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-run\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235239 kubelet[2801]: I1106 00:27:33.234547 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-hubble-tls\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234567 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b25d12-0c25-46fc-97ab-088ae3f8a65c-lib-modules\") pod \"kube-proxy-qd4x7\" (UID: \"54b25d12-0c25-46fc-97ab-088ae3f8a65c\") " pod="kube-system/kube-proxy-qd4x7" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234586 
2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-kernel\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234606 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/101cc7b4-5405-41fa-89c4-1c62c65bedce-clustermesh-secrets\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234623 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-bpf-maps\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234639 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-hostproc\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235532 kubelet[2801]: I1106 00:27:33.234656 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cni-path\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235703 kubelet[2801]: I1106 00:27:33.234675 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qc69\" (UniqueName: 
\"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-kube-api-access-7qc69\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.235703 kubelet[2801]: I1106 00:27:33.234694 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6xw\" (UniqueName: \"kubernetes.io/projected/54b25d12-0c25-46fc-97ab-088ae3f8a65c-kube-api-access-rx6xw\") pod \"kube-proxy-qd4x7\" (UID: \"54b25d12-0c25-46fc-97ab-088ae3f8a65c\") " pod="kube-system/kube-proxy-qd4x7" Nov 6 00:27:33.235703 kubelet[2801]: I1106 00:27:33.234714 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-cgroup\") pod \"cilium-l9ckn\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") " pod="kube-system/cilium-l9ckn" Nov 6 00:27:33.520126 systemd[1]: Created slice kubepods-besteffort-podf0241b02_81f5_4904_8d9e_271d2b368ed2.slice - libcontainer container kubepods-besteffort-podf0241b02_81f5_4904_8d9e_271d2b368ed2.slice. 
Nov 6 00:27:33.532617 kubelet[2801]: E1106 00:27:33.532544 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:33.534527 containerd[1622]: time="2025-11-06T00:27:33.533950153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qd4x7,Uid:54b25d12-0c25-46fc-97ab-088ae3f8a65c,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:33.541615 kubelet[2801]: I1106 00:27:33.541489 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x582\" (UniqueName: \"kubernetes.io/projected/f0241b02-81f5-4904-8d9e-271d2b368ed2-kube-api-access-7x582\") pod \"cilium-operator-6c4d7847fc-fzlsj\" (UID: \"f0241b02-81f5-4904-8d9e-271d2b368ed2\") " pod="kube-system/cilium-operator-6c4d7847fc-fzlsj" Nov 6 00:27:33.541615 kubelet[2801]: I1106 00:27:33.541553 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0241b02-81f5-4904-8d9e-271d2b368ed2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fzlsj\" (UID: \"f0241b02-81f5-4904-8d9e-271d2b368ed2\") " pod="kube-system/cilium-operator-6c4d7847fc-fzlsj" Nov 6 00:27:33.791404 kubelet[2801]: E1106 00:27:33.786583 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:33.795632 containerd[1622]: time="2025-11-06T00:27:33.795566217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9ckn,Uid:101cc7b4-5405-41fa-89c4-1c62c65bedce,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:33.827532 kubelet[2801]: E1106 00:27:33.826971 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 6 00:27:33.828405 containerd[1622]: time="2025-11-06T00:27:33.828094389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzlsj,Uid:f0241b02-81f5-4904-8d9e-271d2b368ed2,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:34.078689 containerd[1622]: time="2025-11-06T00:27:34.078001924Z" level=info msg="connecting to shim 4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655" address="unix:///run/containerd/s/2db4e2882d452a46937e8864e049e903dc7edad8d74b0697dd0b50c17289f4ee" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:34.322292 systemd[1]: Started cri-containerd-4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655.scope - libcontainer container 4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655. Nov 6 00:27:34.417309 containerd[1622]: time="2025-11-06T00:27:34.416735139Z" level=info msg="connecting to shim 2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba" address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:34.475250 systemd[1]: Started cri-containerd-2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba.scope - libcontainer container 2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba. 
Nov 6 00:27:34.606947 containerd[1622]: time="2025-11-06T00:27:34.604683312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qd4x7,Uid:54b25d12-0c25-46fc-97ab-088ae3f8a65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655\"" Nov 6 00:27:34.610141 kubelet[2801]: E1106 00:27:34.609775 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:34.614552 containerd[1622]: time="2025-11-06T00:27:34.614409624Z" level=info msg="CreateContainer within sandbox \"4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:27:34.723151 containerd[1622]: time="2025-11-06T00:27:34.722177401Z" level=info msg="connecting to shim ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1" address="unix:///run/containerd/s/9759cb13bcbd5d83fd5c0be5262e94c5d1223c9afa446f30aca43594ae601a3e" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:34.728716 containerd[1622]: time="2025-11-06T00:27:34.728571682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9ckn,Uid:101cc7b4-5405-41fa-89c4-1c62c65bedce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\"" Nov 6 00:27:34.734364 kubelet[2801]: E1106 00:27:34.732222 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:34.740542 containerd[1622]: time="2025-11-06T00:27:34.739570188Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 00:27:34.982017 containerd[1622]: time="2025-11-06T00:27:34.979955643Z" level=info msg="Container 
f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:35.034231 systemd[1]: Started cri-containerd-ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1.scope - libcontainer container ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1. Nov 6 00:27:35.073885 containerd[1622]: time="2025-11-06T00:27:35.073783460Z" level=info msg="CreateContainer within sandbox \"4c162c4586b7120412705eedebf20a846e206c919029d7cd2242c2fde313c655\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39\"" Nov 6 00:27:35.083063 containerd[1622]: time="2025-11-06T00:27:35.082983187Z" level=info msg="StartContainer for \"f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39\"" Nov 6 00:27:35.097077 containerd[1622]: time="2025-11-06T00:27:35.096679364Z" level=info msg="connecting to shim f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39" address="unix:///run/containerd/s/2db4e2882d452a46937e8864e049e903dc7edad8d74b0697dd0b50c17289f4ee" protocol=ttrpc version=3 Nov 6 00:27:35.201213 systemd[1]: Started cri-containerd-f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39.scope - libcontainer container f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39. 
Nov 6 00:27:35.322116 containerd[1622]: time="2025-11-06T00:27:35.322055666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzlsj,Uid:f0241b02-81f5-4904-8d9e-271d2b368ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\"" Nov 6 00:27:35.333465 kubelet[2801]: E1106 00:27:35.332973 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:35.481185 containerd[1622]: time="2025-11-06T00:27:35.481071493Z" level=info msg="StartContainer for \"f048bcc81a6f99f745c7630cf5160ab47644ee988cec8e47309d6e99cebbfa39\" returns successfully" Nov 6 00:27:35.753676 kubelet[2801]: E1106 00:27:35.753509 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:36.774245 kubelet[2801]: E1106 00:27:36.774202 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:36.789012 kubelet[2801]: E1106 00:27:36.788199 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:36.900017 kubelet[2801]: I1106 00:27:36.899450 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qd4x7" podStartSLOduration=4.899429854 podStartE2EDuration="4.899429854s" podCreationTimestamp="2025-11-06 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:35.843613986 +0000 UTC m=+8.310069024" watchObservedRunningTime="2025-11-06 
00:27:36.899429854 +0000 UTC m=+9.365884892" Nov 6 00:27:37.784871 kubelet[2801]: E1106 00:27:37.784351 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:38.116304 kubelet[2801]: E1106 00:27:38.115608 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:38.794526 kubelet[2801]: E1106 00:27:38.792518 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:38.794526 kubelet[2801]: E1106 00:27:38.794360 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:39.798573 kubelet[2801]: E1106 00:27:39.798490 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:46.277499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868404685.mount: Deactivated successfully. 
Nov 6 00:27:53.856249 containerd[1622]: time="2025-11-06T00:27:53.855738438Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:27:53.887474 containerd[1622]: time="2025-11-06T00:27:53.887378103Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 00:27:53.893877 containerd[1622]: time="2025-11-06T00:27:53.893766465Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:27:53.895398 containerd[1622]: time="2025-11-06T00:27:53.895357221Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.155731157s" Nov 6 00:27:53.895398 containerd[1622]: time="2025-11-06T00:27:53.895395112Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 00:27:53.909534 containerd[1622]: time="2025-11-06T00:27:53.909476807Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 00:27:53.909709 containerd[1622]: time="2025-11-06T00:27:53.909485814Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:27:54.171096 containerd[1622]: time="2025-11-06T00:27:54.170934781Z" level=info msg="Container 36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:54.275667 containerd[1622]: time="2025-11-06T00:27:54.275584178Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\"" Nov 6 00:27:54.276567 containerd[1622]: time="2025-11-06T00:27:54.276507591Z" level=info msg="StartContainer for \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\"" Nov 6 00:27:54.277810 containerd[1622]: time="2025-11-06T00:27:54.277767547Z" level=info msg="connecting to shim 36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011" address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" protocol=ttrpc version=3 Nov 6 00:27:54.320133 systemd[1]: Started cri-containerd-36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011.scope - libcontainer container 36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011. Nov 6 00:27:54.374246 systemd[1]: cri-containerd-36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011.scope: Deactivated successfully. 
Nov 6 00:27:54.377765 containerd[1622]: time="2025-11-06T00:27:54.377720529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" id:\"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" pid:3222 exited_at:{seconds:1762388874 nanos:377117207}" Nov 6 00:27:54.403919 containerd[1622]: time="2025-11-06T00:27:54.403847982Z" level=info msg="received exit event container_id:\"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" id:\"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" pid:3222 exited_at:{seconds:1762388874 nanos:377117207}" Nov 6 00:27:54.405059 containerd[1622]: time="2025-11-06T00:27:54.405012839Z" level=info msg="StartContainer for \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" returns successfully" Nov 6 00:27:54.431886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011-rootfs.mount: Deactivated successfully. 
Nov 6 00:27:55.043397 kubelet[2801]: E1106 00:27:54.990258 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:55.993339 kubelet[2801]: E1106 00:27:55.993267 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:57.078062 kubelet[2801]: E1106 00:27:57.077938 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:57.088725 containerd[1622]: time="2025-11-06T00:27:57.088593047Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:27:57.390027 containerd[1622]: time="2025-11-06T00:27:57.389879479Z" level=info msg="Container d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:57.906090 containerd[1622]: time="2025-11-06T00:27:57.906036059Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\"" Nov 6 00:27:57.906580 containerd[1622]: time="2025-11-06T00:27:57.906541197Z" level=info msg="StartContainer for \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\"" Nov 6 00:27:57.907661 containerd[1622]: time="2025-11-06T00:27:57.907619110Z" level=info msg="connecting to shim d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557" 
address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" protocol=ttrpc version=3 Nov 6 00:27:57.936195 systemd[1]: Started cri-containerd-d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557.scope - libcontainer container d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557. Nov 6 00:27:58.082114 containerd[1622]: time="2025-11-06T00:27:58.082062958Z" level=info msg="StartContainer for \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" returns successfully" Nov 6 00:27:58.172956 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:27:58.173342 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:27:58.173504 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:27:58.175574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:27:58.182070 systemd[1]: cri-containerd-d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557.scope: Deactivated successfully. Nov 6 00:27:58.185755 containerd[1622]: time="2025-11-06T00:27:58.185696016Z" level=info msg="received exit event container_id:\"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" id:\"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" pid:3269 exited_at:{seconds:1762388878 nanos:185392938}" Nov 6 00:27:58.186228 containerd[1622]: time="2025-11-06T00:27:58.186189413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" id:\"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" pid:3269 exited_at:{seconds:1762388878 nanos:185392938}" Nov 6 00:27:58.209181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557-rootfs.mount: Deactivated successfully. 
Nov 6 00:27:58.353425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:27:59.087757 kubelet[2801]: E1106 00:27:59.087704 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:59.091948 containerd[1622]: time="2025-11-06T00:27:59.091897011Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:27:59.865859 containerd[1622]: time="2025-11-06T00:27:59.865776000Z" level=info msg="Container f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:59.983398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861532040.mount: Deactivated successfully. Nov 6 00:28:00.085972 containerd[1622]: time="2025-11-06T00:28:00.085891036Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\"" Nov 6 00:28:00.087697 containerd[1622]: time="2025-11-06T00:28:00.086611077Z" level=info msg="StartContainer for \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\"" Nov 6 00:28:00.088594 containerd[1622]: time="2025-11-06T00:28:00.088502706Z" level=info msg="connecting to shim f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9" address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" protocol=ttrpc version=3 Nov 6 00:28:00.122048 systemd[1]: Started cri-containerd-f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9.scope - libcontainer container f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9. 
Nov 6 00:28:00.181799 systemd[1]: cri-containerd-f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9.scope: Deactivated successfully. Nov 6 00:28:00.184081 containerd[1622]: time="2025-11-06T00:28:00.184023415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" id:\"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" pid:3320 exited_at:{seconds:1762388880 nanos:183181034}" Nov 6 00:28:00.210703 containerd[1622]: time="2025-11-06T00:28:00.210623411Z" level=info msg="received exit event container_id:\"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" id:\"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" pid:3320 exited_at:{seconds:1762388880 nanos:183181034}" Nov 6 00:28:00.213116 containerd[1622]: time="2025-11-06T00:28:00.212801028Z" level=info msg="StartContainer for \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" returns successfully" Nov 6 00:28:00.866603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9-rootfs.mount: Deactivated successfully. 
Nov 6 00:28:01.107404 kubelet[2801]: E1106 00:28:01.107345 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:01.110808 containerd[1622]: time="2025-11-06T00:28:01.110757473Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:28:01.536316 containerd[1622]: time="2025-11-06T00:28:01.536219278Z" level=info msg="Container e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:01.540921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267960888.mount: Deactivated successfully. Nov 6 00:28:01.640783 containerd[1622]: time="2025-11-06T00:28:01.640705343Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\"" Nov 6 00:28:01.643575 containerd[1622]: time="2025-11-06T00:28:01.643383539Z" level=info msg="StartContainer for \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\"" Nov 6 00:28:01.646264 containerd[1622]: time="2025-11-06T00:28:01.646197138Z" level=info msg="connecting to shim e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba" address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" protocol=ttrpc version=3 Nov 6 00:28:01.686802 systemd[1]: Started cri-containerd-e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba.scope - libcontainer container e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba. 
Nov 6 00:28:01.740286 systemd[1]: cri-containerd-e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba.scope: Deactivated successfully. Nov 6 00:28:01.742523 containerd[1622]: time="2025-11-06T00:28:01.742450544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" id:\"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" pid:3369 exited_at:{seconds:1762388881 nanos:742020066}" Nov 6 00:28:01.786271 containerd[1622]: time="2025-11-06T00:28:01.786109940Z" level=info msg="received exit event container_id:\"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" id:\"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" pid:3369 exited_at:{seconds:1762388881 nanos:742020066}" Nov 6 00:28:01.804596 containerd[1622]: time="2025-11-06T00:28:01.803888851Z" level=info msg="StartContainer for \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" returns successfully" Nov 6 00:28:01.821277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba-rootfs.mount: Deactivated successfully. 
Nov 6 00:28:02.116322 kubelet[2801]: E1106 00:28:02.116273 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:02.694872 containerd[1622]: time="2025-11-06T00:28:02.694790060Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:28:02.707879 containerd[1622]: time="2025-11-06T00:28:02.707153058Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 00:28:02.736371 containerd[1622]: time="2025-11-06T00:28:02.736264793Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:28:02.738370 containerd[1622]: time="2025-11-06T00:28:02.738304971Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.828772099s" Nov 6 00:28:02.738476 containerd[1622]: time="2025-11-06T00:28:02.738374172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 00:28:02.741011 containerd[1622]: time="2025-11-06T00:28:02.740961275Z" level=info msg="CreateContainer within sandbox 
\"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 00:28:02.890691 containerd[1622]: time="2025-11-06T00:28:02.890637708Z" level=info msg="Container 5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:03.014236 containerd[1622]: time="2025-11-06T00:28:03.014033777Z" level=info msg="CreateContainer within sandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\"" Nov 6 00:28:03.014775 containerd[1622]: time="2025-11-06T00:28:03.014720496Z" level=info msg="StartContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\"" Nov 6 00:28:03.015833 containerd[1622]: time="2025-11-06T00:28:03.015792968Z" level=info msg="connecting to shim 5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6" address="unix:///run/containerd/s/9759cb13bcbd5d83fd5c0be5262e94c5d1223c9afa446f30aca43594ae601a3e" protocol=ttrpc version=3 Nov 6 00:28:03.056236 systemd[1]: Started cri-containerd-5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6.scope - libcontainer container 5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6. 
Nov 6 00:28:03.155401 containerd[1622]: time="2025-11-06T00:28:03.155345656Z" level=info msg="StartContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" returns successfully" Nov 6 00:28:03.166577 kubelet[2801]: E1106 00:28:03.166477 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:03.178569 containerd[1622]: time="2025-11-06T00:28:03.178508427Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:28:03.349706 containerd[1622]: time="2025-11-06T00:28:03.349637532Z" level=info msg="Container d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:03.442388 containerd[1622]: time="2025-11-06T00:28:03.442307391Z" level=info msg="CreateContainer within sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\"" Nov 6 00:28:03.442976 containerd[1622]: time="2025-11-06T00:28:03.442946872Z" level=info msg="StartContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\"" Nov 6 00:28:03.445762 containerd[1622]: time="2025-11-06T00:28:03.445716037Z" level=info msg="connecting to shim d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9" address="unix:///run/containerd/s/53172bbab7af5486d4eb51f452b0dd629a3b36ee928b5baa55413b4d12a58174" protocol=ttrpc version=3 Nov 6 00:28:03.477211 systemd[1]: Started cri-containerd-d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9.scope - libcontainer container d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9. 
Nov 6 00:28:03.571125 containerd[1622]: time="2025-11-06T00:28:03.571059574Z" level=info msg="StartContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" returns successfully" Nov 6 00:28:03.685185 containerd[1622]: time="2025-11-06T00:28:03.684982460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" id:\"3d7481f13b63ddd92b38ec245e9f2ad1846a1839dcf43591a0b9409bc8646756\" pid:3477 exited_at:{seconds:1762388883 nanos:684232995}" Nov 6 00:28:03.738784 kubelet[2801]: I1106 00:28:03.738743 2801 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:28:04.172526 kubelet[2801]: E1106 00:28:04.172479 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:04.173104 kubelet[2801]: E1106 00:28:04.172655 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:05.086014 kubelet[2801]: I1106 00:28:05.085855 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fzlsj" podStartSLOduration=4.684749839 podStartE2EDuration="32.085801869s" podCreationTimestamp="2025-11-06 00:27:33 +0000 UTC" firstStartedPulling="2025-11-06 00:27:35.338299656 +0000 UTC m=+7.804754694" lastFinishedPulling="2025-11-06 00:28:02.739351686 +0000 UTC m=+35.205806724" observedRunningTime="2025-11-06 00:28:05.065089058 +0000 UTC m=+37.531544097" watchObservedRunningTime="2025-11-06 00:28:05.085801869 +0000 UTC m=+37.552256907" Nov 6 00:28:05.095626 systemd[1]: Created slice kubepods-burstable-pod10fc79f0_acf1_4ee0_9b28_69e38ac019e9.slice - libcontainer container kubepods-burstable-pod10fc79f0_acf1_4ee0_9b28_69e38ac019e9.slice. 
Nov 6 00:28:05.101405 systemd[1]: Created slice kubepods-burstable-poded8004d8_134e_43b6_a61b_ac28de3c0143.slice - libcontainer container kubepods-burstable-poded8004d8_134e_43b6_a61b_ac28de3c0143.slice. Nov 6 00:28:05.133536 kubelet[2801]: I1106 00:28:05.133447 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10fc79f0-acf1-4ee0-9b28-69e38ac019e9-config-volume\") pod \"coredns-668d6bf9bc-7zpzd\" (UID: \"10fc79f0-acf1-4ee0-9b28-69e38ac019e9\") " pod="kube-system/coredns-668d6bf9bc-7zpzd" Nov 6 00:28:05.133536 kubelet[2801]: I1106 00:28:05.133498 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8004d8-134e-43b6-a61b-ac28de3c0143-config-volume\") pod \"coredns-668d6bf9bc-lr6ct\" (UID: \"ed8004d8-134e-43b6-a61b-ac28de3c0143\") " pod="kube-system/coredns-668d6bf9bc-lr6ct" Nov 6 00:28:05.133536 kubelet[2801]: I1106 00:28:05.133517 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cglwn\" (UniqueName: \"kubernetes.io/projected/10fc79f0-acf1-4ee0-9b28-69e38ac019e9-kube-api-access-cglwn\") pod \"coredns-668d6bf9bc-7zpzd\" (UID: \"10fc79f0-acf1-4ee0-9b28-69e38ac019e9\") " pod="kube-system/coredns-668d6bf9bc-7zpzd" Nov 6 00:28:05.133536 kubelet[2801]: I1106 00:28:05.133534 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vx78\" (UniqueName: \"kubernetes.io/projected/ed8004d8-134e-43b6-a61b-ac28de3c0143-kube-api-access-9vx78\") pod \"coredns-668d6bf9bc-lr6ct\" (UID: \"ed8004d8-134e-43b6-a61b-ac28de3c0143\") " pod="kube-system/coredns-668d6bf9bc-lr6ct" Nov 6 00:28:05.177503 kubelet[2801]: E1106 00:28:05.177452 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:05.179846 kubelet[2801]: E1106 00:28:05.178866 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:05.350290 kubelet[2801]: I1106 00:28:05.349300 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l9ckn" podStartSLOduration=14.182772504 podStartE2EDuration="33.349243189s" podCreationTimestamp="2025-11-06 00:27:32 +0000 UTC" firstStartedPulling="2025-11-06 00:27:34.738062241 +0000 UTC m=+7.204517279" lastFinishedPulling="2025-11-06 00:27:53.904532926 +0000 UTC m=+26.370987964" observedRunningTime="2025-11-06 00:28:05.349160554 +0000 UTC m=+37.815615592" watchObservedRunningTime="2025-11-06 00:28:05.349243189 +0000 UTC m=+37.815698227" Nov 6 00:28:05.699479 kubelet[2801]: E1106 00:28:05.699297 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:05.700251 containerd[1622]: time="2025-11-06T00:28:05.700211870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zpzd,Uid:10fc79f0-acf1-4ee0-9b28-69e38ac019e9,Namespace:kube-system,Attempt:0,}" Nov 6 00:28:05.704776 kubelet[2801]: E1106 00:28:05.704739 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:05.705564 containerd[1622]: time="2025-11-06T00:28:05.705495531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr6ct,Uid:ed8004d8-134e-43b6-a61b-ac28de3c0143,Namespace:kube-system,Attempt:0,}" Nov 6 00:28:06.179202 kubelet[2801]: E1106 00:28:06.179151 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:07.377564 systemd-networkd[1517]: cilium_host: Link UP Nov 6 00:28:07.378178 systemd-networkd[1517]: cilium_net: Link UP Nov 6 00:28:07.378401 systemd-networkd[1517]: cilium_net: Gained carrier Nov 6 00:28:07.378583 systemd-networkd[1517]: cilium_host: Gained carrier Nov 6 00:28:07.388040 systemd-networkd[1517]: cilium_net: Gained IPv6LL Nov 6 00:28:07.499102 systemd-networkd[1517]: cilium_vxlan: Link UP Nov 6 00:28:07.499120 systemd-networkd[1517]: cilium_vxlan: Gained carrier Nov 6 00:28:07.773870 kernel: NET: Registered PF_ALG protocol family Nov 6 00:28:07.926063 systemd-networkd[1517]: cilium_host: Gained IPv6LL Nov 6 00:28:08.541371 systemd-networkd[1517]: lxc_health: Link UP Nov 6 00:28:08.541900 systemd-networkd[1517]: lxc_health: Gained carrier Nov 6 00:28:08.679175 systemd-networkd[1517]: lxc2e3904e10004: Link UP Nov 6 00:28:08.680861 kernel: eth0: renamed from tmp8320c Nov 6 00:28:08.682056 systemd-networkd[1517]: lxc2e3904e10004: Gained carrier Nov 6 00:28:08.975914 kernel: eth0: renamed from tmp8c107 Nov 6 00:28:08.976453 systemd-networkd[1517]: lxce83cbbaf33c3: Link UP Nov 6 00:28:08.977227 systemd-networkd[1517]: lxce83cbbaf33c3: Gained carrier Nov 6 00:28:09.526033 systemd-networkd[1517]: cilium_vxlan: Gained IPv6LL Nov 6 00:28:09.787861 kubelet[2801]: E1106 00:28:09.787088 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:09.974173 systemd-networkd[1517]: lxc2e3904e10004: Gained IPv6LL Nov 6 00:28:10.102122 systemd-networkd[1517]: lxc_health: Gained IPv6LL Nov 6 00:28:10.193520 kubelet[2801]: E1106 00:28:10.193484 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:10.614176 
systemd-networkd[1517]: lxce83cbbaf33c3: Gained IPv6LL Nov 6 00:28:11.195095 kubelet[2801]: E1106 00:28:11.195009 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:28:13.872079 containerd[1622]: time="2025-11-06T00:28:13.872020081Z" level=info msg="connecting to shim 8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc" address="unix:///run/containerd/s/8e9e6147f5d66f851ee3b396c04c4bee463893b588fc949841d88c9565009f55" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:28:13.884813 containerd[1622]: time="2025-11-06T00:28:13.884721529Z" level=info msg="connecting to shim 8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32" address="unix:///run/containerd/s/e9830127055d214bff8d80f28ffdb8669c7edc4d5c6b4ff283c531917353ceb6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:28:13.910172 systemd[1]: Started cri-containerd-8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc.scope - libcontainer container 8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc. Nov 6 00:28:13.915768 systemd[1]: Started cri-containerd-8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32.scope - libcontainer container 8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32. 
Nov 6 00:28:13.933803 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 6 00:28:13.938160 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 6 00:28:14.047660 containerd[1622]: time="2025-11-06T00:28:14.047580084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zpzd,Uid:10fc79f0-acf1-4ee0-9b28-69e38ac019e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc\""
Nov 6 00:28:14.052168 kubelet[2801]: E1106 00:28:14.052129 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:14.053932 containerd[1622]: time="2025-11-06T00:28:14.053868553Z" level=info msg="CreateContainer within sandbox \"8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 6 00:28:14.076586 containerd[1622]: time="2025-11-06T00:28:14.076510750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr6ct,Uid:ed8004d8-134e-43b6-a61b-ac28de3c0143,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32\""
Nov 6 00:28:14.105907 containerd[1622]: time="2025-11-06T00:28:14.105293582Z" level=info msg="Container 67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:14.110469 kubelet[2801]: E1106 00:28:14.110396 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:14.112902 containerd[1622]: time="2025-11-06T00:28:14.112809428Z" level=info msg="CreateContainer within sandbox \"8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 6 00:28:14.138037 containerd[1622]: time="2025-11-06T00:28:14.137879564Z" level=info msg="CreateContainer within sandbox \"8320c20fb6822a3878a1981d4adabf529a9e4d601e9083824385a2697edeefdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2\""
Nov 6 00:28:14.138572 containerd[1622]: time="2025-11-06T00:28:14.138523694Z" level=info msg="StartContainer for \"67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2\""
Nov 6 00:28:14.141110 containerd[1622]: time="2025-11-06T00:28:14.141071535Z" level=info msg="connecting to shim 67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2" address="unix:///run/containerd/s/8e9e6147f5d66f851ee3b396c04c4bee463893b588fc949841d88c9565009f55" protocol=ttrpc version=3
Nov 6 00:28:14.149672 containerd[1622]: time="2025-11-06T00:28:14.149602997Z" level=info msg="Container 6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:14.171172 containerd[1622]: time="2025-11-06T00:28:14.171111750Z" level=info msg="CreateContainer within sandbox \"8c107ed5ca110b53944e4a3535c3fd80ae1516d0b749bd45da8bb6d1f5f4ed32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b\""
Nov 6 00:28:14.171172 containerd[1622]: time="2025-11-06T00:28:14.171986546Z" level=info msg="StartContainer for \"6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b\""
Nov 6 00:28:14.173449 containerd[1622]: time="2025-11-06T00:28:14.173340585Z" level=info msg="connecting to shim 6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b" address="unix:///run/containerd/s/e9830127055d214bff8d80f28ffdb8669c7edc4d5c6b4ff283c531917353ceb6" protocol=ttrpc version=3
Nov 6 00:28:14.175129 systemd[1]: Started cri-containerd-67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2.scope - libcontainer container 67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2.
Nov 6 00:28:14.196162 systemd[1]: Started cri-containerd-6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b.scope - libcontainer container 6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b.
Nov 6 00:28:14.244083 containerd[1622]: time="2025-11-06T00:28:14.243988171Z" level=info msg="StartContainer for \"6a4a3c16b6d9cf83e8d5cd170707c537d3ee169ed1f8e06115601441fe2ab14b\" returns successfully"
Nov 6 00:28:14.244410 containerd[1622]: time="2025-11-06T00:28:14.244380306Z" level=info msg="StartContainer for \"67e748f4c805f1925e41f922aedf3223e6861d2a77b65cbe9f8b1ade49ed12a2\" returns successfully"
Nov 6 00:28:15.211247 kubelet[2801]: E1106 00:28:15.211202 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:15.213167 kubelet[2801]: E1106 00:28:15.213146 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:15.884215 kubelet[2801]: I1106 00:28:15.884018 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lr6ct" podStartSLOduration=42.884000282 podStartE2EDuration="42.884000282s" podCreationTimestamp="2025-11-06 00:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:15.883550466 +0000 UTC m=+48.350005524" watchObservedRunningTime="2025-11-06 00:28:15.884000282 +0000 UTC m=+48.350455320"
Nov 6 00:28:16.215723 kubelet[2801]: E1106 00:28:16.215341 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:16.215723 kubelet[2801]: E1106 00:28:16.215621 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:16.443338 kubelet[2801]: I1106 00:28:16.443255 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7zpzd" podStartSLOduration=43.44322663 podStartE2EDuration="43.44322663s" podCreationTimestamp="2025-11-06 00:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:16.441316164 +0000 UTC m=+48.907771212" watchObservedRunningTime="2025-11-06 00:28:16.44322663 +0000 UTC m=+48.909681668"
Nov 6 00:28:17.217516 kubelet[2801]: E1106 00:28:17.217453 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:17.218080 kubelet[2801]: E1106 00:28:17.217551 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:17.457755 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:34140.service - OpenSSH per-connection server daemon (10.0.0.1:34140).
Nov 6 00:28:17.534762 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 34140 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:17.536868 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:17.542744 systemd-logind[1591]: New session 8 of user core.
Nov 6 00:28:17.550091 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 6 00:28:17.817661 sshd[4124]: Connection closed by 10.0.0.1 port 34140
Nov 6 00:28:17.818070 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:17.822179 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:34140.service: Deactivated successfully.
Nov 6 00:28:17.824655 systemd[1]: session-8.scope: Deactivated successfully.
Nov 6 00:28:17.827627 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit.
Nov 6 00:28:17.828891 systemd-logind[1591]: Removed session 8.
Nov 6 00:28:22.831290 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:37784.service - OpenSSH per-connection server daemon (10.0.0.1:37784).
Nov 6 00:28:22.895124 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 37784 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:22.896529 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:22.902086 systemd-logind[1591]: New session 9 of user core.
Nov 6 00:28:22.913103 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 6 00:28:23.043535 sshd[4146]: Connection closed by 10.0.0.1 port 37784
Nov 6 00:28:23.043945 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:23.049076 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:37784.service: Deactivated successfully.
Nov 6 00:28:23.051715 systemd[1]: session-9.scope: Deactivated successfully.
Nov 6 00:28:23.052689 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit.
Nov 6 00:28:23.054580 systemd-logind[1591]: Removed session 9.
Nov 6 00:28:28.059047 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:37788.service - OpenSSH per-connection server daemon (10.0.0.1:37788).
Nov 6 00:28:28.113081 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 37788 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:28.114891 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:28.120132 systemd-logind[1591]: New session 10 of user core.
Nov 6 00:28:28.130198 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 6 00:28:28.275545 sshd[4165]: Connection closed by 10.0.0.1 port 37788
Nov 6 00:28:28.275957 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:28.281054 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:37788.service: Deactivated successfully.
Nov 6 00:28:28.283382 systemd[1]: session-10.scope: Deactivated successfully.
Nov 6 00:28:28.284631 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit.
Nov 6 00:28:28.286918 systemd-logind[1591]: Removed session 10.
Nov 6 00:28:33.293607 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:53866.service - OpenSSH per-connection server daemon (10.0.0.1:53866).
Nov 6 00:28:33.360211 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 53866 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:33.362653 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:33.368476 systemd-logind[1591]: New session 11 of user core.
Nov 6 00:28:33.385130 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 6 00:28:33.718021 sshd[4182]: Connection closed by 10.0.0.1 port 53866
Nov 6 00:28:33.718418 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:33.723630 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:53866.service: Deactivated successfully.
Nov 6 00:28:33.725958 systemd[1]: session-11.scope: Deactivated successfully.
Nov 6 00:28:33.726983 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit.
Nov 6 00:28:33.728270 systemd-logind[1591]: Removed session 11.
Nov 6 00:28:38.538676 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:53870.service - OpenSSH per-connection server daemon (10.0.0.1:53870).
Nov 6 00:28:38.606163 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 53870 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:38.611709 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:38.623800 systemd-logind[1591]: New session 12 of user core.
Nov 6 00:28:38.637306 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 6 00:28:38.797251 sshd[4202]: Connection closed by 10.0.0.1 port 53870
Nov 6 00:28:38.799133 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:38.808293 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:53870.service: Deactivated successfully.
Nov 6 00:28:38.810490 systemd[1]: session-12.scope: Deactivated successfully.
Nov 6 00:28:38.812617 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit.
Nov 6 00:28:38.815777 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884).
Nov 6 00:28:38.817397 systemd-logind[1591]: Removed session 12.
Nov 6 00:28:38.884182 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:38.886451 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:38.892195 systemd-logind[1591]: New session 13 of user core.
Nov 6 00:28:38.911559 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 6 00:28:39.259807 sshd[4220]: Connection closed by 10.0.0.1 port 53884
Nov 6 00:28:39.261094 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:39.290340 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:53884.service: Deactivated successfully.
Nov 6 00:28:39.299284 systemd[1]: session-13.scope: Deactivated successfully.
Nov 6 00:28:39.302102 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit.
Nov 6 00:28:39.307349 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:53888.service - OpenSSH per-connection server daemon (10.0.0.1:53888).
Nov 6 00:28:39.308786 systemd-logind[1591]: Removed session 13.
Nov 6 00:28:39.379820 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:39.382523 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:39.393280 systemd-logind[1591]: New session 14 of user core.
Nov 6 00:28:39.408791 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 6 00:28:39.598713 sshd[4236]: Connection closed by 10.0.0.1 port 53888
Nov 6 00:28:39.599375 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:39.606902 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:53888.service: Deactivated successfully.
Nov 6 00:28:39.611100 systemd[1]: session-14.scope: Deactivated successfully.
Nov 6 00:28:39.613362 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit.
Nov 6 00:28:39.616140 systemd-logind[1591]: Removed session 14.
Nov 6 00:28:44.625932 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:36464.service - OpenSSH per-connection server daemon (10.0.0.1:36464).
Nov 6 00:28:44.717228 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 36464 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:44.719788 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:44.731754 systemd-logind[1591]: New session 15 of user core.
Nov 6 00:28:44.740394 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 6 00:28:44.891769 sshd[4253]: Connection closed by 10.0.0.1 port 36464
Nov 6 00:28:44.890837 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:44.897177 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:36464.service: Deactivated successfully.
Nov 6 00:28:44.900491 systemd[1]: session-15.scope: Deactivated successfully.
Nov 6 00:28:44.905533 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit.
Nov 6 00:28:44.907343 systemd-logind[1591]: Removed session 15.
Nov 6 00:28:49.918643 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:36480.service - OpenSSH per-connection server daemon (10.0.0.1:36480).
Nov 6 00:28:50.018224 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 36480 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:50.025091 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:50.056181 systemd-logind[1591]: New session 16 of user core.
Nov 6 00:28:50.075344 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 6 00:28:50.302359 sshd[4269]: Connection closed by 10.0.0.1 port 36480
Nov 6 00:28:50.306521 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:50.318416 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:36480.service: Deactivated successfully.
Nov 6 00:28:50.324151 systemd[1]: session-16.scope: Deactivated successfully.
Nov 6 00:28:50.325858 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit.
Nov 6 00:28:50.333641 systemd-logind[1591]: Removed session 16.
Nov 6 00:28:53.686632 kubelet[2801]: E1106 00:28:53.685218 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:54.691113 kubelet[2801]: E1106 00:28:54.691008 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:55.361265 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424).
Nov 6 00:28:55.542280 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:55.548179 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:55.578623 systemd-logind[1591]: New session 17 of user core.
Nov 6 00:28:55.594221 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 6 00:28:55.686262 kubelet[2801]: E1106 00:28:55.686097 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:55.953038 sshd[4285]: Connection closed by 10.0.0.1 port 52424
Nov 6 00:28:55.952105 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:55.988248 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:52424.service: Deactivated successfully.
Nov 6 00:28:55.992726 systemd[1]: session-17.scope: Deactivated successfully.
Nov 6 00:28:55.994449 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit.
Nov 6 00:28:56.002677 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:52438.service - OpenSSH per-connection server daemon (10.0.0.1:52438).
Nov 6 00:28:56.004278 systemd-logind[1591]: Removed session 17.
Nov 6 00:28:56.152773 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 52438 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:56.154680 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:56.177633 systemd-logind[1591]: New session 18 of user core.
Nov 6 00:28:56.195627 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 6 00:28:56.683783 kubelet[2801]: E1106 00:28:56.683106 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:57.010945 sshd[4301]: Connection closed by 10.0.0.1 port 52438
Nov 6 00:28:57.015272 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:57.031997 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:52438.service: Deactivated successfully.
Nov 6 00:28:57.035857 systemd[1]: session-18.scope: Deactivated successfully.
Nov 6 00:28:57.041244 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit.
Nov 6 00:28:57.046586 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442).
Nov 6 00:28:57.048343 systemd-logind[1591]: Removed session 18.
Nov 6 00:28:57.265237 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:57.270780 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:57.291426 systemd-logind[1591]: New session 19 of user core.
Nov 6 00:28:57.304519 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 6 00:28:58.955762 sshd[4317]: Connection closed by 10.0.0.1 port 52442
Nov 6 00:28:58.954462 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:58.976697 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:52442.service: Deactivated successfully.
Nov 6 00:28:58.981175 systemd[1]: session-19.scope: Deactivated successfully.
Nov 6 00:28:58.982754 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit.
Nov 6 00:28:58.991110 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:52456.service - OpenSSH per-connection server daemon (10.0.0.1:52456).
Nov 6 00:28:58.997064 systemd-logind[1591]: Removed session 19.
Nov 6 00:28:59.121782 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 52456 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:28:59.124950 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:59.141691 systemd-logind[1591]: New session 20 of user core.
Nov 6 00:28:59.158801 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 6 00:29:00.219659 sshd[4338]: Connection closed by 10.0.0.1 port 52456
Nov 6 00:29:00.220801 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:00.274335 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:52456.service: Deactivated successfully.
Nov 6 00:29:00.288684 systemd[1]: session-20.scope: Deactivated successfully.
Nov 6 00:29:00.307672 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit.
Nov 6 00:29:00.315269 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:58606.service - OpenSSH per-connection server daemon (10.0.0.1:58606).
Nov 6 00:29:00.328214 systemd-logind[1591]: Removed session 20.
Nov 6 00:29:00.441897 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 58606 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:00.449387 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:00.465306 systemd-logind[1591]: New session 21 of user core.
Nov 6 00:29:00.488240 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 6 00:29:00.868538 sshd[4352]: Connection closed by 10.0.0.1 port 58606
Nov 6 00:29:00.867970 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:00.885970 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:58606.service: Deactivated successfully.
Nov 6 00:29:00.891929 systemd[1]: session-21.scope: Deactivated successfully.
Nov 6 00:29:00.893321 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit.
Nov 6 00:29:00.896326 systemd-logind[1591]: Removed session 21.
Nov 6 00:29:05.885959 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:58618.service - OpenSSH per-connection server daemon (10.0.0.1:58618).
Nov 6 00:29:05.943547 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 58618 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:05.945397 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:05.950605 systemd-logind[1591]: New session 22 of user core.
Nov 6 00:29:05.967009 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 6 00:29:06.134229 sshd[4369]: Connection closed by 10.0.0.1 port 58618
Nov 6 00:29:06.134574 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:06.139107 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:58618.service: Deactivated successfully.
Nov 6 00:29:06.141311 systemd[1]: session-22.scope: Deactivated successfully.
Nov 6 00:29:06.142391 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit.
Nov 6 00:29:06.143863 systemd-logind[1591]: Removed session 22.
Nov 6 00:29:11.149119 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:50236.service - OpenSSH per-connection server daemon (10.0.0.1:50236).
Nov 6 00:29:11.224539 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 50236 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:11.227059 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:11.244238 systemd-logind[1591]: New session 23 of user core.
Nov 6 00:29:11.255213 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 6 00:29:11.467118 sshd[4389]: Connection closed by 10.0.0.1 port 50236
Nov 6 00:29:11.467419 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:11.473638 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:50236.service: Deactivated successfully.
Nov 6 00:29:11.476600 systemd[1]: session-23.scope: Deactivated successfully.
Nov 6 00:29:11.477757 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit.
Nov 6 00:29:11.479469 systemd-logind[1591]: Removed session 23.
Nov 6 00:29:16.486313 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242).
Nov 6 00:29:16.543577 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:16.545121 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:16.550123 systemd-logind[1591]: New session 24 of user core.
Nov 6 00:29:16.564020 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 6 00:29:16.672763 sshd[4405]: Connection closed by 10.0.0.1 port 50242
Nov 6 00:29:16.673148 sshd-session[4402]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:16.677710 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:50242.service: Deactivated successfully.
Nov 6 00:29:16.679768 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 00:29:16.680799 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit.
Nov 6 00:29:16.682326 systemd-logind[1591]: Removed session 24.
Nov 6 00:29:20.687683 kubelet[2801]: E1106 00:29:20.687626 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:21.691002 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:37866.service - OpenSSH per-connection server daemon (10.0.0.1:37866).
Nov 6 00:29:21.757791 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 37866 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:21.759818 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:21.765232 systemd-logind[1591]: New session 25 of user core.
Nov 6 00:29:21.775026 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 00:29:21.905583 sshd[4421]: Connection closed by 10.0.0.1 port 37866
Nov 6 00:29:21.907486 sshd-session[4418]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:21.923993 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:37866.service: Deactivated successfully.
Nov 6 00:29:21.927385 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 00:29:21.937044 systemd-logind[1591]: Session 25 logged out. Waiting for processes to exit.
Nov 6 00:29:21.946243 systemd[1]: Started sshd@25-10.0.0.88:22-10.0.0.1:37882.service - OpenSSH per-connection server daemon (10.0.0.1:37882).
Nov 6 00:29:21.950866 systemd-logind[1591]: Removed session 25.
Nov 6 00:29:22.053088 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 37882 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:22.055094 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:22.063806 systemd-logind[1591]: New session 26 of user core.
Nov 6 00:29:22.079205 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 6 00:29:23.446555 containerd[1622]: time="2025-11-06T00:29:23.446267598Z" level=info msg="StopContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" with timeout 30 (s)"
Nov 6 00:29:23.458778 containerd[1622]: time="2025-11-06T00:29:23.458724170Z" level=info msg="Stop container \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" with signal terminated"
Nov 6 00:29:23.474429 systemd[1]: cri-containerd-5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6.scope: Deactivated successfully.
Nov 6 00:29:23.479500 containerd[1622]: time="2025-11-06T00:29:23.479324104Z" level=info msg="received exit event container_id:\"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" id:\"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" pid:3409 exited_at:{seconds:1762388963 nanos:475512509}"
Nov 6 00:29:23.481923 containerd[1622]: time="2025-11-06T00:29:23.480789931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" id:\"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" pid:3409 exited_at:{seconds:1762388963 nanos:475512509}"
Nov 6 00:29:23.520853 containerd[1622]: time="2025-11-06T00:29:23.520710654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" id:\"28ca75d9a9a83699978f85d6071d45f3982b6b6b306aef04c9100f89dd8ef92c\" pid:4462 exited_at:{seconds:1762388963 nanos:489893194}"
Nov 6 00:29:23.528708 containerd[1622]: time="2025-11-06T00:29:23.528421208Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 6 00:29:23.533350 containerd[1622]: time="2025-11-06T00:29:23.533291821Z" level=info msg="StopContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" with timeout 2 (s)"
Nov 6 00:29:23.533928 containerd[1622]: time="2025-11-06T00:29:23.533884680Z" level=info msg="Stop container \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" with signal terminated"
Nov 6 00:29:23.543078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6-rootfs.mount: Deactivated successfully.
Nov 6 00:29:23.548131 systemd-networkd[1517]: lxc_health: Link DOWN
Nov 6 00:29:23.548152 systemd-networkd[1517]: lxc_health: Lost carrier
Nov 6 00:29:23.568290 containerd[1622]: time="2025-11-06T00:29:23.568218306Z" level=info msg="StopContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" returns successfully"
Nov 6 00:29:23.570544 systemd[1]: cri-containerd-d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9.scope: Deactivated successfully.
Nov 6 00:29:23.571337 systemd[1]: cri-containerd-d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9.scope: Consumed 7.827s CPU time, 123.8M memory peak, 220K read from disk, 13.3M written to disk.
Nov 6 00:29:23.572234 containerd[1622]: time="2025-11-06T00:29:23.572171788Z" level=info msg="StopPodSandbox for \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\""
Nov 6 00:29:23.572325 containerd[1622]: time="2025-11-06T00:29:23.572296174Z" level=info msg="Container to stop \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.573323 containerd[1622]: time="2025-11-06T00:29:23.573221891Z" level=info msg="received exit event container_id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" pid:3444 exited_at:{seconds:1762388963 nanos:572869927}"
Nov 6 00:29:23.573790 containerd[1622]: time="2025-11-06T00:29:23.573414144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" id:\"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" pid:3444 exited_at:{seconds:1762388963 nanos:572869927}"
Nov 6 00:29:23.582721 systemd[1]: cri-containerd-ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1.scope: Deactivated successfully.
Nov 6 00:29:23.584925 containerd[1622]: time="2025-11-06T00:29:23.584869908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" id:\"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" pid:3005 exit_status:137 exited_at:{seconds:1762388963 nanos:584533513}"
Nov 6 00:29:23.607887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9-rootfs.mount: Deactivated successfully.
Nov 6 00:29:23.627101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1-rootfs.mount: Deactivated successfully.
Nov 6 00:29:23.633685 containerd[1622]: time="2025-11-06T00:29:23.633621709Z" level=info msg="StopContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" returns successfully"
Nov 6 00:29:23.634567 containerd[1622]: time="2025-11-06T00:29:23.634516227Z" level=info msg="StopPodSandbox for \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\""
Nov 6 00:29:23.634719 containerd[1622]: time="2025-11-06T00:29:23.634601950Z" level=info msg="Container to stop \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.634719 containerd[1622]: time="2025-11-06T00:29:23.634618731Z" level=info msg="Container to stop \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.634719 containerd[1622]: time="2025-11-06T00:29:23.634630574Z" level=info msg="Container to stop \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.634719 containerd[1622]: time="2025-11-06T00:29:23.634641514Z" level=info msg="Container to stop \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.634719 containerd[1622]: time="2025-11-06T00:29:23.634652425Z" level=info msg="Container to stop \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:29:23.642213 systemd[1]: cri-containerd-2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba.scope: Deactivated successfully.
Nov 6 00:29:23.666057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba-rootfs.mount: Deactivated successfully.
Nov 6 00:29:23.729227 containerd[1622]: time="2025-11-06T00:29:23.728447007Z" level=info msg="shim disconnected" id=ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1 namespace=k8s.io
Nov 6 00:29:23.729227 containerd[1622]: time="2025-11-06T00:29:23.728486622Z" level=warning msg="cleaning up after shim disconnected" id=ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1 namespace=k8s.io
Nov 6 00:29:23.729227 containerd[1622]: time="2025-11-06T00:29:23.728496491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 00:29:23.764101 containerd[1622]: time="2025-11-06T00:29:23.763813544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" id:\"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" pid:2962 exit_status:137 exited_at:{seconds:1762388963 nanos:643665086}"
Nov 6 00:29:23.764101 containerd[1622]: time="2025-11-06T00:29:23.763971472Z" level=info msg="received exit event sandbox_id:\"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" exit_status:137 exited_at:{seconds:1762388963 nanos:584533513}"
Nov 6 00:29:23.766725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1-shm.mount: Deactivated successfully.
Nov 6 00:29:23.769004 containerd[1622]: time="2025-11-06T00:29:23.768964006Z" level=info msg="TearDown network for sandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" successfully"
Nov 6 00:29:23.769004 containerd[1622]: time="2025-11-06T00:29:23.768994773Z" level=info msg="StopPodSandbox for \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" returns successfully"
Nov 6 00:29:23.799705 containerd[1622]: time="2025-11-06T00:29:23.799407119Z" level=info msg="shim disconnected" id=2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba namespace=k8s.io
Nov 6 00:29:23.799705 containerd[1622]: time="2025-11-06T00:29:23.799446593Z" level=warning msg="cleaning up after shim disconnected" id=2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba namespace=k8s.io
Nov 6 00:29:23.799705 containerd[1622]: time="2025-11-06T00:29:23.799455840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 00:29:23.799705 containerd[1622]: time="2025-11-06T00:29:23.799571669Z" level=info msg="received exit event sandbox_id:\"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" exit_status:137 exited_at:{seconds:1762388963 nanos:643665086}"
Nov 6 00:29:23.824476 containerd[1622]: time="2025-11-06T00:29:23.824367543Z" level=info msg="TearDown network for sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" successfully"
Nov 6 00:29:23.824476 containerd[1622]: time="2025-11-06T00:29:23.824425162Z" level=info msg="StopPodSandbox for \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" returns successfully"
Nov 6 00:29:23.938808 kubelet[2801]: I1106 00:29:23.938731 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0241b02-81f5-4904-8d9e-271d2b368ed2-cilium-config-path\") pod \"f0241b02-81f5-4904-8d9e-271d2b368ed2\" (UID: \"f0241b02-81f5-4904-8d9e-271d2b368ed2\") "
Nov 6 00:29:23.938808 kubelet[2801]: I1106 00:29:23.938792 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-hubble-tls\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.938808 kubelet[2801]: I1106 00:29:23.938818 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/101cc7b4-5405-41fa-89c4-1c62c65bedce-clustermesh-secrets\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938866 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-config-path\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938885 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qc69\" (UniqueName: \"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-kube-api-access-7qc69\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938900 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-lib-modules\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938913 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-run\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938927 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-bpf-maps\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939513 kubelet[2801]: I1106 00:29:23.938941 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-hostproc\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.938956 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-kernel\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.938969 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cni-path\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.938986 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-etc-cni-netd\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.939002 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-net\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.939020 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-xtables-lock\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.939754 kubelet[2801]: I1106 00:29:23.939035 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-cgroup\") pod \"101cc7b4-5405-41fa-89c4-1c62c65bedce\" (UID: \"101cc7b4-5405-41fa-89c4-1c62c65bedce\") "
Nov 6 00:29:23.940013 kubelet[2801]: I1106 00:29:23.939052 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x582\" (UniqueName: \"kubernetes.io/projected/f0241b02-81f5-4904-8d9e-271d2b368ed2-kube-api-access-7x582\") pod \"f0241b02-81f5-4904-8d9e-271d2b368ed2\" (UID: \"f0241b02-81f5-4904-8d9e-271d2b368ed2\") "
Nov 6 00:29:23.940013 kubelet[2801]: I1106 00:29:23.939546 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-hostproc" (OuterVolumeSpecName: "hostproc") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.940769 kubelet[2801]: I1106 00:29:23.940738 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.941020 kubelet[2801]: I1106 00:29:23.940916 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.942893 kubelet[2801]: I1106 00:29:23.940939 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.942996 kubelet[2801]: I1106 00:29:23.940960 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943090 kubelet[2801]: I1106 00:29:23.941009 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943180 kubelet[2801]: I1106 00:29:23.941036 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cni-path" (OuterVolumeSpecName: "cni-path") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943262 kubelet[2801]: I1106 00:29:23.941059 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943381 kubelet[2801]: I1106 00:29:23.941208 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943381 kubelet[2801]: I1106 00:29:23.941236 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 00:29:23.943381 kubelet[2801]: I1106 00:29:23.943020 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101cc7b4-5405-41fa-89c4-1c62c65bedce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 6 00:29:23.944116 kubelet[2801]: I1106 00:29:23.944076 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 00:29:23.946029 kubelet[2801]: I1106 00:29:23.945991 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0241b02-81f5-4904-8d9e-271d2b368ed2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0241b02-81f5-4904-8d9e-271d2b368ed2" (UID: "f0241b02-81f5-4904-8d9e-271d2b368ed2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 6 00:29:23.946552 kubelet[2801]: I1106 00:29:23.946518 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-kube-api-access-7qc69" (OuterVolumeSpecName: "kube-api-access-7qc69") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "kube-api-access-7qc69". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 00:29:23.947071 kubelet[2801]: I1106 00:29:23.947043 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0241b02-81f5-4904-8d9e-271d2b368ed2-kube-api-access-7x582" (OuterVolumeSpecName: "kube-api-access-7x582") pod "f0241b02-81f5-4904-8d9e-271d2b368ed2" (UID: "f0241b02-81f5-4904-8d9e-271d2b368ed2"). InnerVolumeSpecName "kube-api-access-7x582". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 00:29:23.947548 kubelet[2801]: I1106 00:29:23.947512 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "101cc7b4-5405-41fa-89c4-1c62c65bedce" (UID: "101cc7b4-5405-41fa-89c4-1c62c65bedce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039864 2801 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039909 2801 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039922 2801 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039930 2801 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039940 2801 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qc69\" (UniqueName: \"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-kube-api-access-7qc69\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039951 2801 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039962 2801 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040018 kubelet[2801]: I1106 00:29:24.039973 2801 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.039984 2801 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.039994 2801 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040004 2801 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040015 2801 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7x582\" (UniqueName: \"kubernetes.io/projected/f0241b02-81f5-4904-8d9e-271d2b368ed2-kube-api-access-7x582\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040025 2801 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0241b02-81f5-4904-8d9e-271d2b368ed2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040037 2801 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/101cc7b4-5405-41fa-89c4-1c62c65bedce-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040047 2801 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/101cc7b4-5405-41fa-89c4-1c62c65bedce-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.040334 kubelet[2801]: I1106 00:29:24.040058 2801 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/101cc7b4-5405-41fa-89c4-1c62c65bedce-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 6 00:29:24.536851 kubelet[2801]: I1106 00:29:24.536771 2801 scope.go:117] "RemoveContainer" containerID="5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6"
Nov 6 00:29:24.539459 containerd[1622]: time="2025-11-06T00:29:24.539396967Z" level=info msg="RemoveContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\""
Nov 6 00:29:24.541993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba-shm.mount: Deactivated successfully.
Nov 6 00:29:24.542747 systemd[1]: var-lib-kubelet-pods-f0241b02\x2d81f5\x2d4904\x2d8d9e\x2d271d2b368ed2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7x582.mount: Deactivated successfully.
Nov 6 00:29:24.542947 systemd[1]: var-lib-kubelet-pods-101cc7b4\x2d5405\x2d41fa\x2d89c4\x2d1c62c65bedce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7qc69.mount: Deactivated successfully.
Nov 6 00:29:24.543039 systemd[1]: var-lib-kubelet-pods-101cc7b4\x2d5405\x2d41fa\x2d89c4\x2d1c62c65bedce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 6 00:29:24.543120 systemd[1]: var-lib-kubelet-pods-101cc7b4\x2d5405\x2d41fa\x2d89c4\x2d1c62c65bedce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 6 00:29:24.548919 systemd[1]: Removed slice kubepods-besteffort-podf0241b02_81f5_4904_8d9e_271d2b368ed2.slice - libcontainer container kubepods-besteffort-podf0241b02_81f5_4904_8d9e_271d2b368ed2.slice.
Nov 6 00:29:24.550533 systemd[1]: Removed slice kubepods-burstable-pod101cc7b4_5405_41fa_89c4_1c62c65bedce.slice - libcontainer container kubepods-burstable-pod101cc7b4_5405_41fa_89c4_1c62c65bedce.slice.
Nov 6 00:29:24.550632 systemd[1]: kubepods-burstable-pod101cc7b4_5405_41fa_89c4_1c62c65bedce.slice: Consumed 7.987s CPU time, 124.1M memory peak, 224K read from disk, 13.3M written to disk.
Nov 6 00:29:24.551030 containerd[1622]: time="2025-11-06T00:29:24.550888496Z" level=info msg="RemoveContainer for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" returns successfully"
Nov 6 00:29:24.552148 kubelet[2801]: I1106 00:29:24.552098 2801 scope.go:117] "RemoveContainer" containerID="5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6"
Nov 6 00:29:24.552565 containerd[1622]: time="2025-11-06T00:29:24.552493595Z" level=error msg="ContainerStatus for \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\": not found"
Nov 6 00:29:24.552708 kubelet[2801]: E1106 00:29:24.552662 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\": not found" containerID="5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6"
Nov 6 00:29:24.552785 kubelet[2801]: I1106 00:29:24.552692 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6"} err="failed to get container status \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cac468d23a7db8ab7f12859bcf00fc4db3832d08f013e6faee589522e5c14e6\": not found"
Nov 6 00:29:24.552785 kubelet[2801]: I1106 00:29:24.552778 2801 scope.go:117] "RemoveContainer" containerID="d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9"
Nov 6 00:29:24.554808 containerd[1622]: time="2025-11-06T00:29:24.554779220Z" level=info msg="RemoveContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\""
Nov 6 00:29:24.560318 containerd[1622]: time="2025-11-06T00:29:24.560270323Z" level=info msg="RemoveContainer for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" returns successfully"
Nov 6 00:29:24.560573 kubelet[2801]: I1106 00:29:24.560470 2801 scope.go:117] "RemoveContainer" containerID="e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba"
Nov 6 00:29:24.562570 containerd[1622]: time="2025-11-06T00:29:24.562498289Z" level=info msg="RemoveContainer for \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\""
Nov 6 00:29:24.567300 containerd[1622]: time="2025-11-06T00:29:24.567273781Z" level=info msg="RemoveContainer for \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" returns successfully"
Nov 6 00:29:24.567473 kubelet[2801]: I1106 00:29:24.567438 2801 scope.go:117] "RemoveContainer" containerID="f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9"
Nov 6 00:29:24.570451 containerd[1622]: time="2025-11-06T00:29:24.570420270Z" level=info msg="RemoveContainer for \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\""
Nov 6 00:29:24.585940 containerd[1622]: time="2025-11-06T00:29:24.585872726Z" level=info msg="RemoveContainer for \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" returns successfully"
Nov 6 00:29:24.586245 kubelet[2801]: I1106 00:29:24.586191 2801 scope.go:117] "RemoveContainer" containerID="d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557"
Nov 6 00:29:24.588760 containerd[1622]: time="2025-11-06T00:29:24.588714699Z" level=info msg="RemoveContainer for \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\""
Nov 6 00:29:24.594105 containerd[1622]: time="2025-11-06T00:29:24.594053756Z" level=info msg="RemoveContainer for \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" returns successfully"
Nov 6 00:29:24.594322 kubelet[2801]: I1106 00:29:24.594293 2801 scope.go:117] "RemoveContainer" containerID="36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011"
Nov 6 00:29:24.595772 containerd[1622]: time="2025-11-06T00:29:24.595728067Z" level=info msg="RemoveContainer for \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\""
Nov 6 00:29:24.600187 containerd[1622]: time="2025-11-06T00:29:24.600114125Z" level=info msg="RemoveContainer for \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" returns successfully"
Nov 6 00:29:24.600413 kubelet[2801]: I1106 00:29:24.600382 2801 scope.go:117] "RemoveContainer" containerID="d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9"
Nov 6 00:29:24.600670 containerd[1622]: time="2025-11-06T00:29:24.600607987Z" level=error msg="ContainerStatus for \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\": not found"
Nov 6 00:29:24.600865 kubelet[2801]: E1106 00:29:24.600803 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\": not found" containerID="d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9"
Nov 6 00:29:24.600905 kubelet[2801]: I1106 00:29:24.600875 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9"} err="failed to get container status \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d427d059c602876623dd9c4d0a3dcc7f7e5515b8f1eb115cda9c1b7cd257fbd9\": not found"
Nov 6 00:29:24.600947 kubelet[2801]: I1106 00:29:24.600911 2801 scope.go:117] "RemoveContainer" containerID="e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba"
Nov 6 00:29:24.601244 containerd[1622]: time="2025-11-06T00:29:24.601180707Z" level=error msg="ContainerStatus for \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\": not found"
Nov 6 00:29:24.601394 kubelet[2801]: E1106 00:29:24.601336 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\": not found" containerID="e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba"
Nov 6 00:29:24.601394 kubelet[2801]: I1106 00:29:24.601364 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba"} err="failed to get container status \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8656a75f3b69ddc2b999ce9e88114d86a05e0629b86fc4326422783eb69beba\": not found"
Nov 6 00:29:24.601394 kubelet[2801]: I1106 00:29:24.601384 2801 scope.go:117] "RemoveContainer" containerID="f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9"
Nov 6 00:29:24.601568 containerd[1622]: time="2025-11-06T00:29:24.601538263Z" level=error msg="ContainerStatus for \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\": not found"
Nov 6 00:29:24.601677 kubelet[2801]: E1106 00:29:24.601650 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\": not found" containerID="f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9"
Nov 6 00:29:24.601715 kubelet[2801]: I1106 00:29:24.601677 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9"} err="failed to get container status \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f23c4df0398328fa58686422a972c2cd03491f1c90f3d3fe16a3adcbc85febf9\": not found"
Nov 6 00:29:24.601715 kubelet[2801]: I1106 00:29:24.601699 2801 scope.go:117] "RemoveContainer" containerID="d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557"
Nov 6 00:29:24.601926 containerd[1622]: time="2025-11-06T00:29:24.601891870Z" level=error msg="ContainerStatus for \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\": not found"
Nov 6 00:29:24.602029 kubelet[2801]: E1106 00:29:24.602004 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\": not found" containerID="d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557"
Nov 6 00:29:24.602090 kubelet[2801]: I1106 00:29:24.602026 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557"} err="failed to get container status \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\": rpc error: code = NotFound desc = an error occurred when try to find container \"d76aba10b5de76e0f367ab4bf62a05e09e0f9f0be908dde8a527a889aae44557\": not found"
Nov 6 00:29:24.602090 kubelet[2801]: I1106 00:29:24.602040 2801 scope.go:117] "RemoveContainer" containerID="36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011"
Nov 6 00:29:24.602269 containerd[1622]: time="2025-11-06T00:29:24.602233154Z" level=error msg="ContainerStatus for \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\": not found"
Nov 6 00:29:24.602409 kubelet[2801]: E1106 00:29:24.602366 2801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\": not found" containerID="36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011"
Nov 6 00:29:24.602453 kubelet[2801]: I1106 00:29:24.602405 2801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011"} err="failed to get container status \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\": rpc error: code = NotFound desc = an error occurred when try to find container \"36b30600507949429c8f96080dfdd1c26ff9bca66ac9115c7f3a4570ae7af011\": not found"
Nov 6 00:29:25.413564 sshd[4437]: Connection closed by 10.0.0.1 port 37882
Nov 6 00:29:25.414060 sshd-session[4434]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:25.419649 systemd[1]: sshd@25-10.0.0.88:22-10.0.0.1:37882.service: Deactivated successfully.
Nov 6 00:29:25.422107 systemd[1]: session-26.scope: Deactivated successfully.
Nov 6 00:29:25.423095 systemd-logind[1591]: Session 26 logged out. Waiting for processes to exit.
Nov 6 00:29:25.424683 systemd-logind[1591]: Removed session 26.
Nov 6 00:29:25.459399 systemd[1]: Started sshd@26-10.0.0.88:22-10.0.0.1:37896.service - OpenSSH per-connection server daemon (10.0.0.1:37896).
Nov 6 00:29:25.529093 sshd[4590]: Accepted publickey for core from 10.0.0.1 port 37896 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:29:25.530917 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:25.535902 systemd-logind[1591]: New session 27 of user core.
Nov 6 00:29:25.547085 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 6 00:29:25.685704 kubelet[2801]: I1106 00:29:25.685559 2801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101cc7b4-5405-41fa-89c4-1c62c65bedce" path="/var/lib/kubelet/pods/101cc7b4-5405-41fa-89c4-1c62c65bedce/volumes"
Nov 6 00:29:25.686680 kubelet[2801]: I1106 00:29:25.686656 2801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0241b02-81f5-4904-8d9e-271d2b368ed2" path="/var/lib/kubelet/pods/f0241b02-81f5-4904-8d9e-271d2b368ed2/volumes"
Nov 6 00:29:26.228158 sshd[4593]: Connection closed by 10.0.0.1 port 37896
Nov 6 00:29:26.231112 sshd-session[4590]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:26.240687 systemd[1]: sshd@26-10.0.0.88:22-10.0.0.1:37896.service: Deactivated successfully.
Nov 6 00:29:26.244803 systemd[1]: session-27.scope: Deactivated successfully.
Nov 6 00:29:26.248646 systemd-logind[1591]: Session 27 logged out. Waiting for processes to exit.
Nov 6 00:29:26.254132 systemd[1]: Started sshd@27-10.0.0.88:22-10.0.0.1:37906.service - OpenSSH per-connection server daemon (10.0.0.1:37906). Nov 6 00:29:26.256904 systemd-logind[1591]: Removed session 27. Nov 6 00:29:26.258150 kubelet[2801]: I1106 00:29:26.257875 2801 memory_manager.go:355] "RemoveStaleState removing state" podUID="f0241b02-81f5-4904-8d9e-271d2b368ed2" containerName="cilium-operator" Nov 6 00:29:26.258150 kubelet[2801]: I1106 00:29:26.257913 2801 memory_manager.go:355] "RemoveStaleState removing state" podUID="101cc7b4-5405-41fa-89c4-1c62c65bedce" containerName="cilium-agent" Nov 6 00:29:26.272990 systemd[1]: Created slice kubepods-burstable-pod7d75d120_5cd7_48b6_9329_ea69da2fe788.slice - libcontainer container kubepods-burstable-pod7d75d120_5cd7_48b6_9329_ea69da2fe788.slice. Nov 6 00:29:26.338628 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 37906 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:26.340690 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:26.347692 systemd-logind[1591]: New session 28 of user core. 
Nov 6 00:29:26.354201 kubelet[2801]: I1106 00:29:26.354145 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-hostproc\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354201 kubelet[2801]: I1106 00:29:26.354198 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-etc-cni-netd\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354220 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d75d120-5cd7-48b6-9329-ea69da2fe788-clustermesh-secrets\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354237 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-lib-modules\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354253 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d75d120-5cd7-48b6-9329-ea69da2fe788-hubble-tls\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354268 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-74bqw\" (UniqueName: \"kubernetes.io/projected/7d75d120-5cd7-48b6-9329-ea69da2fe788-kube-api-access-74bqw\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354290 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-cilium-run\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354387 kubelet[2801]: I1106 00:29:26.354305 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-bpf-maps\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354520 kubelet[2801]: I1106 00:29:26.354418 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d75d120-5cd7-48b6-9329-ea69da2fe788-cilium-config-path\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354578 kubelet[2801]: I1106 00:29:26.354533 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-host-proc-sys-net\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354656 kubelet[2801]: I1106 00:29:26.354591 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-cilium-cgroup\") pod 
\"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354656 kubelet[2801]: I1106 00:29:26.354650 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-cni-path\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354656 kubelet[2801]: I1106 00:29:26.354671 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d75d120-5cd7-48b6-9329-ea69da2fe788-cilium-ipsec-secrets\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354895 kubelet[2801]: I1106 00:29:26.354696 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-xtables-lock\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.354895 kubelet[2801]: I1106 00:29:26.354726 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d75d120-5cd7-48b6-9329-ea69da2fe788-host-proc-sys-kernel\") pod \"cilium-phnnl\" (UID: \"7d75d120-5cd7-48b6-9329-ea69da2fe788\") " pod="kube-system/cilium-phnnl" Nov 6 00:29:26.356116 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 6 00:29:26.411860 sshd[4610]: Connection closed by 10.0.0.1 port 37906 Nov 6 00:29:26.412380 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:26.426817 systemd[1]: sshd@27-10.0.0.88:22-10.0.0.1:37906.service: Deactivated successfully. 
Nov 6 00:29:26.429595 systemd[1]: session-28.scope: Deactivated successfully. Nov 6 00:29:26.430606 systemd-logind[1591]: Session 28 logged out. Waiting for processes to exit. Nov 6 00:29:26.435324 systemd[1]: Started sshd@28-10.0.0.88:22-10.0.0.1:37910.service - OpenSSH per-connection server daemon (10.0.0.1:37910). Nov 6 00:29:26.436251 systemd-logind[1591]: Removed session 28. Nov 6 00:29:26.505059 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 37910 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:26.507296 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:26.512768 systemd-logind[1591]: New session 29 of user core. Nov 6 00:29:26.527177 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 6 00:29:26.579354 kubelet[2801]: E1106 00:29:26.579268 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:26.580196 containerd[1622]: time="2025-11-06T00:29:26.580049692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phnnl,Uid:7d75d120-5cd7-48b6-9329-ea69da2fe788,Namespace:kube-system,Attempt:0,}" Nov 6 00:29:26.829854 containerd[1622]: time="2025-11-06T00:29:26.829737328Z" level=info msg="connecting to shim 328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:26.866135 systemd[1]: Started cri-containerd-328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638.scope - libcontainer container 328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638. 
Nov 6 00:29:26.907904 containerd[1622]: time="2025-11-06T00:29:26.907853861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phnnl,Uid:7d75d120-5cd7-48b6-9329-ea69da2fe788,Namespace:kube-system,Attempt:0,} returns sandbox id \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\"" Nov 6 00:29:26.908678 kubelet[2801]: E1106 00:29:26.908651 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:26.910772 containerd[1622]: time="2025-11-06T00:29:26.910733145Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:29:26.941875 containerd[1622]: time="2025-11-06T00:29:26.941797566Z" level=info msg="Container ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:27.036596 containerd[1622]: time="2025-11-06T00:29:27.036532386Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\"" Nov 6 00:29:27.037171 containerd[1622]: time="2025-11-06T00:29:27.037136746Z" level=info msg="StartContainer for \"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\"" Nov 6 00:29:27.038197 containerd[1622]: time="2025-11-06T00:29:27.038169074Z" level=info msg="connecting to shim ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" protocol=ttrpc version=3 Nov 6 00:29:27.070053 systemd[1]: Started cri-containerd-ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd.scope - libcontainer container 
ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd. Nov 6 00:29:27.101968 containerd[1622]: time="2025-11-06T00:29:27.101268711Z" level=info msg="StartContainer for \"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\" returns successfully" Nov 6 00:29:27.113169 systemd[1]: cri-containerd-ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd.scope: Deactivated successfully. Nov 6 00:29:27.115883 containerd[1622]: time="2025-11-06T00:29:27.115793889Z" level=info msg="received exit event container_id:\"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\" id:\"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\" pid:4690 exited_at:{seconds:1762388967 nanos:115466501}" Nov 6 00:29:27.115980 containerd[1622]: time="2025-11-06T00:29:27.115927842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\" id:\"ed0c2ae7a70b6ae178636b78800eb1e66a8f2e984efe5eab913086332b12a7cd\" pid:4690 exited_at:{seconds:1762388967 nanos:115466501}" Nov 6 00:29:27.554522 kubelet[2801]: E1106 00:29:27.554273 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:27.557557 containerd[1622]: time="2025-11-06T00:29:27.557285977Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:29:27.578855 containerd[1622]: time="2025-11-06T00:29:27.578600387Z" level=info msg="Container f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:27.582734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779637982.mount: Deactivated successfully. 
Nov 6 00:29:27.586290 containerd[1622]: time="2025-11-06T00:29:27.586247497Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\"" Nov 6 00:29:27.587849 containerd[1622]: time="2025-11-06T00:29:27.586979507Z" level=info msg="StartContainer for \"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\"" Nov 6 00:29:27.588232 containerd[1622]: time="2025-11-06T00:29:27.588175284Z" level=info msg="connecting to shim f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" protocol=ttrpc version=3 Nov 6 00:29:27.615153 systemd[1]: Started cri-containerd-f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f.scope - libcontainer container f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f. Nov 6 00:29:27.652381 containerd[1622]: time="2025-11-06T00:29:27.652314412Z" level=info msg="StartContainer for \"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\" returns successfully" Nov 6 00:29:27.661423 systemd[1]: cri-containerd-f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f.scope: Deactivated successfully. 
Nov 6 00:29:27.663183 containerd[1622]: time="2025-11-06T00:29:27.663124471Z" level=info msg="received exit event container_id:\"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\" id:\"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\" pid:4734 exited_at:{seconds:1762388967 nanos:662808314}" Nov 6 00:29:27.663305 containerd[1622]: time="2025-11-06T00:29:27.663253614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\" id:\"f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f\" pid:4734 exited_at:{seconds:1762388967 nanos:662808314}" Nov 6 00:29:27.688310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f43f4a8454d36e32def02fa258400f9cf73a54ae9cbf4859d8a7c24f93609a1f-rootfs.mount: Deactivated successfully. Nov 6 00:29:27.693115 containerd[1622]: time="2025-11-06T00:29:27.693053616Z" level=info msg="StopPodSandbox for \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\"" Nov 6 00:29:27.693335 containerd[1622]: time="2025-11-06T00:29:27.693248874Z" level=info msg="TearDown network for sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" successfully" Nov 6 00:29:27.693335 containerd[1622]: time="2025-11-06T00:29:27.693263191Z" level=info msg="StopPodSandbox for \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" returns successfully" Nov 6 00:29:27.693689 containerd[1622]: time="2025-11-06T00:29:27.693660902Z" level=info msg="RemovePodSandbox for \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\"" Nov 6 00:29:27.693749 containerd[1622]: time="2025-11-06T00:29:27.693695927Z" level=info msg="Forcibly stopping sandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\"" Nov 6 00:29:27.693785 containerd[1622]: time="2025-11-06T00:29:27.693762994Z" level=info msg="TearDown network for sandbox 
\"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" successfully" Nov 6 00:29:27.695766 containerd[1622]: time="2025-11-06T00:29:27.695722251Z" level=info msg="Ensure that sandbox 2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba in task-service has been cleanup successfully" Nov 6 00:29:27.700005 containerd[1622]: time="2025-11-06T00:29:27.699952544Z" level=info msg="RemovePodSandbox \"2e9feb55db54399baacf5edd543db65e977f12ec14ebff5008410157cce8d4ba\" returns successfully" Nov 6 00:29:27.700625 containerd[1622]: time="2025-11-06T00:29:27.700585217Z" level=info msg="StopPodSandbox for \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\"" Nov 6 00:29:27.700779 containerd[1622]: time="2025-11-06T00:29:27.700753795Z" level=info msg="TearDown network for sandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" successfully" Nov 6 00:29:27.700779 containerd[1622]: time="2025-11-06T00:29:27.700772550Z" level=info msg="StopPodSandbox for \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" returns successfully" Nov 6 00:29:27.701152 containerd[1622]: time="2025-11-06T00:29:27.701119716Z" level=info msg="RemovePodSandbox for \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\"" Nov 6 00:29:27.701215 containerd[1622]: time="2025-11-06T00:29:27.701162687Z" level=info msg="Forcibly stopping sandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\"" Nov 6 00:29:27.701288 containerd[1622]: time="2025-11-06T00:29:27.701266543Z" level=info msg="TearDown network for sandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" successfully" Nov 6 00:29:27.707081 containerd[1622]: time="2025-11-06T00:29:27.707026612Z" level=info msg="Ensure that sandbox ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1 in task-service has been cleanup successfully" Nov 6 00:29:27.710518 containerd[1622]: time="2025-11-06T00:29:27.710471322Z" level=info 
msg="RemovePodSandbox \"ef45dda15b8f631a84656102736233a5ff866d062d5d99f0324feb616a3073a1\" returns successfully" Nov 6 00:29:27.796605 kubelet[2801]: E1106 00:29:27.796535 2801 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:29:28.559294 kubelet[2801]: E1106 00:29:28.559234 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:28.561897 containerd[1622]: time="2025-11-06T00:29:28.560862880Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:29:28.844014 containerd[1622]: time="2025-11-06T00:29:28.843957600Z" level=info msg="Container b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:28.883518 containerd[1622]: time="2025-11-06T00:29:28.883450406Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\"" Nov 6 00:29:28.884331 containerd[1622]: time="2025-11-06T00:29:28.884239645Z" level=info msg="StartContainer for \"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\"" Nov 6 00:29:28.888406 containerd[1622]: time="2025-11-06T00:29:28.888348367Z" level=info msg="connecting to shim b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" protocol=ttrpc version=3 Nov 6 00:29:28.924170 systemd[1]: Started 
cri-containerd-b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37.scope - libcontainer container b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37. Nov 6 00:29:28.977962 systemd[1]: cri-containerd-b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37.scope: Deactivated successfully. Nov 6 00:29:28.979385 containerd[1622]: time="2025-11-06T00:29:28.979314875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\" id:\"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\" pid:4779 exited_at:{seconds:1762388968 nanos:978575390}" Nov 6 00:29:28.997260 containerd[1622]: time="2025-11-06T00:29:28.997121534Z" level=info msg="received exit event container_id:\"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\" id:\"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\" pid:4779 exited_at:{seconds:1762388968 nanos:978575390}" Nov 6 00:29:29.008930 containerd[1622]: time="2025-11-06T00:29:29.008883084Z" level=info msg="StartContainer for \"b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37\" returns successfully" Nov 6 00:29:29.026962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74a051a79d65b1154c413a9c6cf7b12bb22380b5f7dff3a3068d7a138ad5b37-rootfs.mount: Deactivated successfully. 
Nov 6 00:29:29.564011 kubelet[2801]: E1106 00:29:29.563957 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:29.566361 containerd[1622]: time="2025-11-06T00:29:29.566310508Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:29:29.678182 containerd[1622]: time="2025-11-06T00:29:29.678121230Z" level=info msg="Container a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:29.683217 kubelet[2801]: E1106 00:29:29.682911 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-7zpzd" podUID="10fc79f0-acf1-4ee0-9b28-69e38ac019e9" Nov 6 00:29:29.688555 containerd[1622]: time="2025-11-06T00:29:29.688491945Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\"" Nov 6 00:29:29.692087 containerd[1622]: time="2025-11-06T00:29:29.689971306Z" level=info msg="StartContainer for \"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\"" Nov 6 00:29:29.692087 containerd[1622]: time="2025-11-06T00:29:29.691312538Z" level=info msg="connecting to shim a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" protocol=ttrpc version=3 Nov 6 00:29:29.724066 systemd[1]: Started 
cri-containerd-a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98.scope - libcontainer container a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98. Nov 6 00:29:29.755541 systemd[1]: cri-containerd-a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98.scope: Deactivated successfully. Nov 6 00:29:29.756385 containerd[1622]: time="2025-11-06T00:29:29.756310048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\" id:\"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\" pid:4820 exited_at:{seconds:1762388969 nanos:755797080}" Nov 6 00:29:29.783024 containerd[1622]: time="2025-11-06T00:29:29.782915073Z" level=info msg="received exit event container_id:\"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\" id:\"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\" pid:4820 exited_at:{seconds:1762388969 nanos:755797080}" Nov 6 00:29:29.791931 containerd[1622]: time="2025-11-06T00:29:29.791876370Z" level=info msg="StartContainer for \"a253888161af1355b36774f9dd3622a3478880271393075fbc100d73cfd02b98\" returns successfully" Nov 6 00:29:30.571497 kubelet[2801]: E1106 00:29:30.571428 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:30.574267 containerd[1622]: time="2025-11-06T00:29:30.574207011Z" level=info msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:29:30.775625 containerd[1622]: time="2025-11-06T00:29:30.775550518Z" level=info msg="Container 108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:30.785996 containerd[1622]: time="2025-11-06T00:29:30.785936581Z" level=info 
msg="CreateContainer within sandbox \"328870e4a16a7d3d80b4fe8562ebcbfde15769527dbfac3082083f528c1ff638\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\"" Nov 6 00:29:30.786770 containerd[1622]: time="2025-11-06T00:29:30.786741259Z" level=info msg="StartContainer for \"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\"" Nov 6 00:29:30.788117 containerd[1622]: time="2025-11-06T00:29:30.788091087Z" level=info msg="connecting to shim 108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b" address="unix:///run/containerd/s/c6b821f316bbf8b98bc00f98918a24fb6429e64459424b8dedfbb7ca249a7c4f" protocol=ttrpc version=3 Nov 6 00:29:30.825124 systemd[1]: Started cri-containerd-108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b.scope - libcontainer container 108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b. Nov 6 00:29:30.861759 containerd[1622]: time="2025-11-06T00:29:30.861718018Z" level=info msg="StartContainer for \"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" returns successfully" Nov 6 00:29:31.009852 containerd[1622]: time="2025-11-06T00:29:31.009762693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"f4dd6e00defff4d216c2cf823f862234d025a595dc47001078a286589358dffd\" pid:4888 exited_at:{seconds:1762388971 nanos:9361946}" Nov 6 00:29:31.058085 kubelet[2801]: I1106 00:29:31.057945 2801 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T00:29:31Z","lastTransitionTime":"2025-11-06T00:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 00:29:31.483028 kernel: alg: No test for 
seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 6 00:29:31.579736 kubelet[2801]: E1106 00:29:31.579677 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:31.598622 kubelet[2801]: I1106 00:29:31.598535 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-phnnl" podStartSLOduration=5.598502762 podStartE2EDuration="5.598502762s" podCreationTimestamp="2025-11-06 00:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:31.597491805 +0000 UTC m=+124.063946843" watchObservedRunningTime="2025-11-06 00:29:31.598502762 +0000 UTC m=+124.064957800" Nov 6 00:29:31.682594 kubelet[2801]: E1106 00:29:31.682507 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-7zpzd" podUID="10fc79f0-acf1-4ee0-9b28-69e38ac019e9" Nov 6 00:29:32.582745 kubelet[2801]: E1106 00:29:32.582693 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:33.144440 containerd[1622]: time="2025-11-06T00:29:33.144376829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"50946209ca4f1a10871db534c88daa3c94587de40c97c459d0180fe4f5248103\" pid:5028 exit_status:1 exited_at:{seconds:1762388973 nanos:143937400}" Nov 6 00:29:33.683742 kubelet[2801]: E1106 00:29:33.682839 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:34.938133 systemd-networkd[1517]: lxc_health: Link UP Nov 6 00:29:34.939804 systemd-networkd[1517]: lxc_health: Gained carrier Nov 6 00:29:35.274034 containerd[1622]: time="2025-11-06T00:29:35.273518454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"7e052f8d6adb3fcc4cb0d2af3073fe03a4ef3dec970e39cd5bd977bf13cc0311\" pid:5441 exited_at:{seconds:1762388975 nanos:272924153}" Nov 6 00:29:36.582648 kubelet[2801]: E1106 00:29:36.582580 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:36.601466 kubelet[2801]: E1106 00:29:36.601127 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:36.822195 systemd-networkd[1517]: lxc_health: Gained IPv6LL Nov 6 00:29:37.453594 containerd[1622]: time="2025-11-06T00:29:37.453531267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"a062700bc8fbdb3ac0f2a58384ce6053b1c761e4298ad28ce23ca97fe12f680c\" pid:5479 exited_at:{seconds:1762388977 nanos:453068384}" Nov 6 00:29:37.592895 kubelet[2801]: E1106 00:29:37.592815 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:29:39.563075 containerd[1622]: time="2025-11-06T00:29:39.563012338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"6c2ebe931004c688c4df0a88eaeba83cffa3f8ad3b9a1ba389744647f55e2921\" pid:5512 exited_at:{seconds:1762388979 nanos:562457100}" 
Nov 6 00:29:41.994087 containerd[1622]: time="2025-11-06T00:29:41.993994898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108f5bdcef093472b80cd1bb894ffc215687e5d8a118c1304403ced5c415423b\" id:\"5a5d1e5e4349a6733aa0edcb1e806d0585898a3b858478e000790460a638ae67\" pid:5537 exited_at:{seconds:1762388981 nanos:993476591}" Nov 6 00:29:42.009654 sshd[4625]: Connection closed by 10.0.0.1 port 37910 Nov 6 00:29:42.010233 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:42.016421 systemd[1]: sshd@28-10.0.0.88:22-10.0.0.1:37910.service: Deactivated successfully. Nov 6 00:29:42.019155 systemd[1]: session-29.scope: Deactivated successfully. Nov 6 00:29:42.020581 systemd-logind[1591]: Session 29 logged out. Waiting for processes to exit. Nov 6 00:29:42.022246 systemd-logind[1591]: Removed session 29.