Nov 8 00:20:51.942391 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:20:51.942413 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:51.942425 kernel: BIOS-provided physical RAM map:
Nov 8 00:20:51.942431 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:20:51.942437 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 8 00:20:51.942443 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 8 00:20:51.942450 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 8 00:20:51.942457 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 8 00:20:51.942463 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 8 00:20:51.942469 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 8 00:20:51.942478 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 8 00:20:51.942484 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 8 00:20:51.942494 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 8 00:20:51.942500 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 8 00:20:51.942510 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 8 00:20:51.942517 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 8 00:20:51.942527 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 8 00:20:51.942533 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 8 00:20:51.942540 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 8 00:20:51.942547 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:20:51.942553 kernel: NX (Execute Disable) protection: active
Nov 8 00:20:51.942560 kernel: APIC: Static calls initialized
Nov 8 00:20:51.942566 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:20:51.942573 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Nov 8 00:20:51.942580 kernel: SMBIOS 2.8 present.
Nov 8 00:20:51.942587 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 8 00:20:51.942593 kernel: Hypervisor detected: KVM
Nov 8 00:20:51.942602 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:20:51.942609 kernel: kvm-clock: using sched offset of 7577029874 cycles
Nov 8 00:20:51.942616 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:20:51.942623 kernel: tsc: Detected 2794.750 MHz processor
Nov 8 00:20:51.942630 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:20:51.942637 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:20:51.942644 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 8 00:20:51.942651 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:20:51.942658 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:20:51.942668 kernel: Using GB pages for direct mapping
Nov 8 00:20:51.942674 kernel: Secure boot disabled
Nov 8 00:20:51.942681 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:20:51.942688 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 8 00:20:51.942699 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:20:51.942706 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942713 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942723 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 8 00:20:51.942730 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942740 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942747 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942767 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:51.942775 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:20:51.942782 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 8 00:20:51.942792 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 8 00:20:51.942800 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 8 00:20:51.942807 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 8 00:20:51.942814 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 8 00:20:51.942821 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 8 00:20:51.942828 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 8 00:20:51.942835 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 8 00:20:51.942842 kernel: No NUMA configuration found
Nov 8 00:20:51.942852 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 8 00:20:51.942862 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 8 00:20:51.942869 kernel: Zone ranges:
Nov 8 00:20:51.942876 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:20:51.942884 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 8 00:20:51.942891 kernel: Normal empty
Nov 8 00:20:51.942898 kernel: Movable zone start for each node
Nov 8 00:20:51.942905 kernel: Early memory node ranges
Nov 8 00:20:51.942912 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:20:51.942919 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 8 00:20:51.942926 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 8 00:20:51.942936 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 8 00:20:51.942943 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 8 00:20:51.942950 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 8 00:20:51.942959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 8 00:20:51.942967 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:20:51.942974 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:20:51.942981 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 8 00:20:51.942988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:20:51.942995 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 8 00:20:51.943005 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:20:51.943012 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 8 00:20:51.943019 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:20:51.943026 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:20:51.943033 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:20:51.943041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:20:51.943048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:20:51.943055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:20:51.943062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:20:51.943072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:20:51.943079 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:20:51.943086 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:20:51.943093 kernel: TSC deadline timer available
Nov 8 00:20:51.943100 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:20:51.943107 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:20:51.943115 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:20:51.943122 kernel: kvm-guest: setup PV sched yield
Nov 8 00:20:51.943129 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 8 00:20:51.943138 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:20:51.943146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:20:51.943153 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:20:51.943160 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:20:51.943168 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:20:51.943175 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:20:51.943181 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:20:51.943189 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:20:51.943197 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:51.943209 kernel: random: crng init done
Nov 8 00:20:51.943216 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:20:51.943224 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:20:51.943231 kernel: Fallback order for Node 0: 0
Nov 8 00:20:51.943238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 8 00:20:51.943245 kernel: Policy zone: DMA32
Nov 8 00:20:51.943253 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:20:51.943260 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166144K reserved, 0K cma-reserved)
Nov 8 00:20:51.943270 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:20:51.943277 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:20:51.943285 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:20:51.943292 kernel: Dynamic Preempt: voluntary
Nov 8 00:20:51.943299 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:20:51.943319 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:20:51.943329 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:20:51.943337 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:20:51.943344 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:20:51.943352 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:20:51.943359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:20:51.943374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:20:51.943384 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:20:51.943392 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:20:51.943399 kernel: Console: colour dummy device 80x25
Nov 8 00:20:51.943407 kernel: printk: console [ttyS0] enabled
Nov 8 00:20:51.943416 kernel: ACPI: Core revision 20230628
Nov 8 00:20:51.943427 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:20:51.943435 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:20:51.943442 kernel: x2apic enabled
Nov 8 00:20:51.943450 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:20:51.943457 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:20:51.943465 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:20:51.943473 kernel: kvm-guest: setup PV IPIs
Nov 8 00:20:51.943480 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:20:51.943488 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:20:51.943498 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 8 00:20:51.943506 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:20:51.943513 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:20:51.943521 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:20:51.943528 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:20:51.943536 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:20:51.943543 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:20:51.943551 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:20:51.943559 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:20:51.943569 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:20:51.943576 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:20:51.943584 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:20:51.943592 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:20:51.943602 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:20:51.943609 kernel: active return thunk: srso_return_thunk
Nov 8 00:20:51.943617 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:20:51.943624 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:20:51.943632 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:20:51.943642 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:20:51.943649 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:20:51.943657 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:20:51.943665 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:20:51.943672 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:20:51.943680 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:20:51.943687 kernel: landlock: Up and running.
Nov 8 00:20:51.943695 kernel: SELinux: Initializing.
Nov 8 00:20:51.943702 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:20:51.943712 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:20:51.943720 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:20:51.943728 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:51.943736 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:51.943743 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:51.943751 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:20:51.943880 kernel: ... version: 0
Nov 8 00:20:51.943889 kernel: ... bit width: 48
Nov 8 00:20:51.943901 kernel: ... generic registers: 6
Nov 8 00:20:51.943911 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:20:51.943919 kernel: ... max period: 00007fffffffffff
Nov 8 00:20:51.943926 kernel: ... fixed-purpose events: 0
Nov 8 00:20:51.943934 kernel: ... event mask: 000000000000003f
Nov 8 00:20:51.943941 kernel: signal: max sigframe size: 1776
Nov 8 00:20:51.943949 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:20:51.943957 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:20:51.943964 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:20:51.943972 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:20:51.943982 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:20:51.943989 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:20:51.943996 kernel: smpboot: Max logical packages: 1
Nov 8 00:20:51.944004 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 8 00:20:51.944014 kernel: devtmpfs: initialized
Nov 8 00:20:51.944022 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:20:51.944030 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 8 00:20:51.944037 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 8 00:20:51.944045 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 8 00:20:51.944055 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 8 00:20:51.944063 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 8 00:20:51.944071 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:20:51.944078 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:20:51.944086 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:20:51.944093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:20:51.944101 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:20:51.944108 kernel: audit: type=2000 audit(1762561250.081:1): state=initialized audit_enabled=0 res=1
Nov 8 00:20:51.944118 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:20:51.944126 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:20:51.944133 kernel: cpuidle: using governor menu
Nov 8 00:20:51.944141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:20:51.944148 kernel: dca service started, version 1.12.1
Nov 8 00:20:51.944156 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:20:51.944163 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:20:51.944171 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:20:51.944179 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:20:51.944189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:20:51.944196 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:20:51.944204 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:20:51.944211 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:20:51.944219 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:20:51.944226 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:20:51.944234 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:20:51.944241 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:20:51.944249 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:20:51.944258 kernel: ACPI: Interpreter enabled
Nov 8 00:20:51.944266 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:20:51.944273 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:20:51.944281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:20:51.944289 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:20:51.944296 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:20:51.944303 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:20:51.944516 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:20:51.944655 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:20:51.944801 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:20:51.944812 kernel: PCI host bridge to bus 0000:00
Nov 8 00:20:51.944957 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:20:51.945084 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:20:51.945201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:20:51.945316 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:20:51.945447 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:20:51.945562 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 8 00:20:51.945676 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:20:51.945868 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:20:51.946013 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:20:51.946140 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 8 00:20:51.946263 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 8 00:20:51.946402 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:20:51.946529 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 8 00:20:51.946654 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:20:51.946818 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:20:51.946947 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 8 00:20:51.947071 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 8 00:20:51.947206 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 8 00:20:51.947388 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:20:51.947608 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 8 00:20:51.947784 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 8 00:20:51.947919 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 8 00:20:51.948066 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:20:51.948195 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 8 00:20:51.948327 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 8 00:20:51.948472 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 8 00:20:51.948602 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 8 00:20:51.948742 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:20:51.948887 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:20:51.949032 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:20:51.949159 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 8 00:20:51.949296 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 8 00:20:51.949444 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:20:51.949572 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 8 00:20:51.949583 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:20:51.949591 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:20:51.949599 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:20:51.949606 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:20:51.949614 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:20:51.949626 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:20:51.949633 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:20:51.949642 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:20:51.949649 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:20:51.949657 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:20:51.949664 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:20:51.949672 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:20:51.949680 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:20:51.949687 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:20:51.949697 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:20:51.949705 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:20:51.949712 kernel: iommu: Default domain type: Translated
Nov 8 00:20:51.949720 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:20:51.949728 kernel: efivars: Registered efivars operations
Nov 8 00:20:51.949735 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:20:51.949743 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:20:51.949751 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 8 00:20:51.949770 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 8 00:20:51.949781 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 8 00:20:51.949788 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 8 00:20:51.949918 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:20:51.950044 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:20:51.950170 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:20:51.950180 kernel: vgaarb: loaded
Nov 8 00:20:51.950188 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:20:51.950195 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:20:51.950203 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:20:51.950215 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:20:51.950223 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:20:51.950230 kernel: pnp: PnP ACPI init
Nov 8 00:20:51.950398 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:20:51.950410 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:20:51.950417 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:20:51.950425 kernel: NET: Registered PF_INET protocol family
Nov 8 00:20:51.950433 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:20:51.950445 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:20:51.950453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:20:51.950460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:20:51.950468 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:20:51.950476 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:20:51.950483 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:20:51.950491 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:20:51.950499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:20:51.950510 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:20:51.950692 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 8 00:20:51.950876 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 8 00:20:51.951016 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:20:51.951161 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:20:51.951278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:20:51.951402 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:20:51.951519 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:20:51.951640 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 8 00:20:51.951650 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:20:51.951658 kernel: Initialise system trusted keyrings
Nov 8 00:20:51.951666 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:20:51.951673 kernel: Key type asymmetric registered
Nov 8 00:20:51.951681 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:20:51.951689 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:20:51.951697 kernel: io scheduler mq-deadline registered
Nov 8 00:20:51.951705 kernel: io scheduler kyber registered
Nov 8 00:20:51.951716 kernel: io scheduler bfq registered
Nov 8 00:20:51.951723 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:20:51.951732 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:20:51.951740 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:20:51.951747 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:20:51.951805 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:20:51.951813 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:20:51.951820 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:20:51.951828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:20:51.951839 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:20:51.952007 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:20:51.952019 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:20:51.952138 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:20:51.952256 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:20:51 UTC (1762561251)
Nov 8 00:20:51.952383 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:20:51.952393 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:20:51.952401 kernel: efifb: probing for efifb
Nov 8 00:20:51.952413 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 8 00:20:51.952421 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 8 00:20:51.952429 kernel: efifb: scrolling: redraw
Nov 8 00:20:51.952437 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 8 00:20:51.952444 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:20:51.952464 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:20:51.952493 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:20:51.952504 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:20:51.952512 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:20:51.952522 kernel: Segment Routing with IPv6
Nov 8 00:20:51.952530 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:20:51.952538 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:20:51.952546 kernel: Key type dns_resolver registered
Nov 8 00:20:51.952554 kernel: IPI shorthand broadcast: enabled
Nov 8 00:20:51.952562 kernel: sched_clock: Marking stable (1149002294, 205109230)->(1482972022, -128860498)
Nov 8 00:20:51.952570 kernel: registered taskstats version 1
Nov 8 00:20:51.952578 kernel: Loading compiled-in X.509 certificates
Nov 8 00:20:51.952586 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:20:51.952597 kernel: Key type .fscrypt registered
Nov 8 00:20:51.952604 kernel: Key type fscrypt-provisioning registered
Nov 8 00:20:51.952615 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:20:51.952626 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:20:51.952636 kernel: ima: No architecture policies found
Nov 8 00:20:51.952646 kernel: clk: Disabling unused clocks
Nov 8 00:20:51.952657 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:20:51.952667 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:20:51.952675 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:20:51.952686 kernel: Run /init as init process
Nov 8 00:20:51.952694 kernel: with arguments:
Nov 8 00:20:51.952702 kernel: /init
Nov 8 00:20:51.952710 kernel: with environment:
Nov 8 00:20:51.952717 kernel: HOME=/
Nov 8 00:20:51.952725 kernel: TERM=linux
Nov 8 00:20:51.952736 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:20:51.952746 systemd[1]: Detected virtualization kvm.
Nov 8 00:20:51.952770 systemd[1]: Detected architecture x86-64.
Nov 8 00:20:51.952778 systemd[1]: Running in initrd.
Nov 8 00:20:51.952790 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:20:51.952798 systemd[1]: Hostname set to .
Nov 8 00:20:51.952807 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:20:51.952817 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:20:51.952826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:20:51.952834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:20:51.952844 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:20:51.952852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:20:51.952861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:20:51.952870 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:20:51.952882 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:20:51.952891 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:20:51.952900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:20:51.952908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:20:51.952917 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:20:51.952925 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:20:51.952934 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:20:51.952942 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:20:51.952953 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:20:51.952962 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:20:51.952971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:20:51.952979 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:20:51.952988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:20:51.952996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:20:51.953005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:20:51.953013 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:20:51.953024 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:20:51.953033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:20:51.953042 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:20:51.953050 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:20:51.953059 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:20:51.953067 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:20:51.953076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:51.953085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:20:51.953093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:20:51.953127 systemd-journald[192]: Collecting audit messages is disabled.
Nov 8 00:20:51.953146 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:20:51.953159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:20:51.953168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:51.953176 systemd-journald[192]: Journal started
Nov 8 00:20:51.953194 systemd-journald[192]: Runtime Journal (/run/log/journal/cc7af1fc1332480d8d89355b86e2ad8c) is 6.0M, max 48.3M, 42.2M free.
Nov 8 00:20:51.952488 systemd-modules-load[194]: Inserted module 'overlay'
Nov 8 00:20:51.970769 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:51.974450 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:20:51.974961 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:20:51.981969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:20:51.990103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:20:52.017987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:52.021777 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:20:52.026868 kernel: Bridge firewalling registered
Nov 8 00:20:52.025900 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 8 00:20:52.032948 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:20:52.033864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:20:52.036689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:20:52.039952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:20:52.045660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:20:52.059521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:20:52.065598 dracut-cmdline[222]: dracut-dracut-053
Nov 8 00:20:52.069845 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:52.066918 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:20:52.106011 systemd-resolved[235]: Positive Trust Anchors:
Nov 8 00:20:52.106024 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:20:52.106056 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:20:52.108563 systemd-resolved[235]: Defaulting to hostname 'linux'.
Nov 8 00:20:52.109802 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:20:52.110623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:20:52.195787 kernel: SCSI subsystem initialized
Nov 8 00:20:52.204773 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:20:52.215777 kernel: iscsi: registered transport (tcp)
Nov 8 00:20:52.237043 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:20:52.237071 kernel: QLogic iSCSI HBA Driver
Nov 8 00:20:52.284179 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:20:52.307907 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:20:52.332783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:20:52.332811 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:20:52.334782 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:20:52.374778 kernel: raid6: avx2x4 gen() 30561 MB/s
Nov 8 00:20:52.391773 kernel: raid6: avx2x2 gen() 31218 MB/s
Nov 8 00:20:52.409510 kernel: raid6: avx2x1 gen() 26024 MB/s
Nov 8 00:20:52.409529 kernel: raid6: using algorithm avx2x2 gen() 31218 MB/s
Nov 8 00:20:52.427785 kernel: raid6: .... xor() 19942 MB/s, rmw enabled
Nov 8 00:20:52.427810 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:20:52.451810 kernel: xor: automatically using best checksumming function avx
Nov 8 00:20:52.631812 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:20:52.646625 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:20:52.664111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:20:52.680359 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Nov 8 00:20:52.686984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:20:52.696121 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:20:52.709988 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Nov 8 00:20:52.748666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:20:52.758049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:20:52.839147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:20:52.855976 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:20:52.873821 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:20:52.878708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:20:52.883441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:20:52.885604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:20:52.894895 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:20:52.903177 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 8 00:20:52.912900 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:20:52.921567 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 8 00:20:52.921809 kernel: libata version 3.00 loaded.
Nov 8 00:20:52.929358 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:20:52.937801 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:20:52.943819 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:20:52.943897 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:20:52.943927 kernel: GPT:9289727 != 19775487
Nov 8 00:20:52.943953 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:20:52.943965 kernel: GPT:9289727 != 19775487
Nov 8 00:20:52.943977 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:20:52.947209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:52.949884 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:20:52.950325 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:20:52.958497 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:20:52.958516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:20:52.958706 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:20:52.950558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:52.963503 kernel: scsi host0: ahci
Nov 8 00:20:52.963708 kernel: scsi host1: ahci
Nov 8 00:20:52.962900 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:52.969727 kernel: scsi host2: ahci
Nov 8 00:20:52.967688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:20:52.977391 kernel: scsi host3: ahci
Nov 8 00:20:52.967950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:52.982257 kernel: scsi host4: ahci
Nov 8 00:20:52.982465 kernel: scsi host5: ahci
Nov 8 00:20:52.969514 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:52.999131 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Nov 8 00:20:52.999150 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (475)
Nov 8 00:20:52.999166 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Nov 8 00:20:52.999199 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Nov 8 00:20:52.999213 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Nov 8 00:20:52.999225 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Nov 8 00:20:52.999235 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Nov 8 00:20:52.999245 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Nov 8 00:20:52.980182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:53.007860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:20:53.013117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:20:53.026807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:20:53.032675 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:20:53.034786 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:20:53.053977 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:20:53.055825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:20:53.064871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:53.055900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:53.069381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:53.069396 disk-uuid[554]: Primary Header is updated.
Nov 8 00:20:53.069396 disk-uuid[554]: Secondary Entries is updated.
Nov 8 00:20:53.069396 disk-uuid[554]: Secondary Header is updated.
Nov 8 00:20:53.060454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:53.066192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:53.084152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:53.093916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:53.120653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:53.310382 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:53.310489 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:53.312802 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:53.313792 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:53.315815 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:20:53.317930 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:20:53.317979 kernel: ata3.00: applying bridge limits
Nov 8 00:20:53.320372 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:53.322360 kernel: ata3.00: configured for UDMA/100
Nov 8 00:20:53.326152 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:20:53.388692 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:20:53.389206 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:20:53.408035 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:20:54.162797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:54.162896 disk-uuid[555]: The operation has completed successfully.
Nov 8 00:20:54.277171 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:20:54.277395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:20:54.311075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:20:54.333335 sh[596]: Success
Nov 8 00:20:54.377794 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:20:54.450398 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:20:54.482493 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:20:54.483359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:20:54.526207 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:20:54.526288 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:54.526313 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:20:54.531724 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:20:54.531847 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:20:54.567224 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:20:54.569164 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:20:54.582011 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:20:54.590291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:20:54.625821 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:54.625905 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:54.625920 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:54.644135 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:54.668124 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:20:54.670464 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:54.722951 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:20:54.739147 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:20:54.818399 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:20:54.840064 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:20:54.866241 ignition[726]: Ignition 2.19.0
Nov 8 00:20:54.867098 ignition[726]: Stage: fetch-offline
Nov 8 00:20:54.867154 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:54.867168 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:54.867321 ignition[726]: parsed url from cmdline: ""
Nov 8 00:20:54.867326 ignition[726]: no config URL provided
Nov 8 00:20:54.867335 ignition[726]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:20:54.867351 ignition[726]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:20:54.867395 ignition[726]: op(1): [started] loading QEMU firmware config module
Nov 8 00:20:54.867404 ignition[726]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:20:54.885979 systemd-networkd[780]: lo: Link UP
Nov 8 00:20:54.885985 systemd-networkd[780]: lo: Gained carrier
Nov 8 00:20:54.888094 systemd-networkd[780]: Enumeration completed
Nov 8 00:20:54.888634 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:20:54.888639 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:20:54.890148 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:20:54.907155 ignition[726]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:20:54.890251 systemd-networkd[780]: eth0: Link UP
Nov 8 00:20:54.890256 systemd-networkd[780]: eth0: Gained carrier
Nov 8 00:20:54.890264 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:20:54.902629 systemd[1]: Reached target network.target - Network.
Nov 8 00:20:54.934866 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:20:55.014310 ignition[726]: parsing config with SHA512: 9a47bae1630746552b5ce0224847f4834afe9e30999b8d384c27f9dfc1073a6328e875992aa870c067f735e4920dd7b06317fc437a482533b0c45f699dc2640e
Nov 8 00:20:55.026080 unknown[726]: fetched base config from "system"
Nov 8 00:20:55.026097 unknown[726]: fetched user config from "qemu"
Nov 8 00:20:55.026912 ignition[726]: fetch-offline: fetch-offline passed
Nov 8 00:20:55.029567 systemd-resolved[235]: Detected conflict on linux IN A 10.0.0.54
Nov 8 00:20:55.027029 ignition[726]: Ignition finished successfully
Nov 8 00:20:55.029581 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Nov 8 00:20:55.036560 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:20:55.042429 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:20:55.058196 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:20:55.121440 ignition[788]: Ignition 2.19.0
Nov 8 00:20:55.121455 ignition[788]: Stage: kargs
Nov 8 00:20:55.121966 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:55.136004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:20:55.121986 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:55.123154 ignition[788]: kargs: kargs passed
Nov 8 00:20:55.123210 ignition[788]: Ignition finished successfully
Nov 8 00:20:55.169051 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:20:55.315973 ignition[796]: Ignition 2.19.0
Nov 8 00:20:55.315998 ignition[796]: Stage: disks
Nov 8 00:20:55.323574 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:55.325351 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:55.328606 ignition[796]: disks: disks passed
Nov 8 00:20:55.328692 ignition[796]: Ignition finished successfully
Nov 8 00:20:55.335093 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:20:55.338680 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:20:55.341730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:20:55.344217 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:20:55.346364 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:20:55.349522 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:20:55.376097 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:20:55.404676 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:20:55.464965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:20:55.496885 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:20:56.001808 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:20:56.003156 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:20:56.005824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:20:56.028861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:20:56.031519 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:20:56.038806 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Nov 8 00:20:56.034068 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:20:56.050110 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:56.050130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:56.050141 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:56.050151 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:56.034116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:20:56.034145 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:20:56.040557 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:20:56.051296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:20:56.057684 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:20:56.103052 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:20:56.110603 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:20:56.116973 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:20:56.122706 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:20:56.232384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:20:56.251865 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:20:56.255832 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:20:56.261040 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:20:56.263598 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:56.284735 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:20:56.289091 ignition[928]: INFO : Ignition 2.19.0
Nov 8 00:20:56.289091 ignition[928]: INFO : Stage: mount
Nov 8 00:20:56.291636 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:56.291636 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:56.291636 ignition[928]: INFO : mount: mount passed
Nov 8 00:20:56.291636 ignition[928]: INFO : Ignition finished successfully
Nov 8 00:20:56.299870 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:20:56.313940 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:20:56.556067 systemd-networkd[780]: eth0: Gained IPv6LL
Nov 8 00:20:57.015930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:20:57.025785 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Nov 8 00:20:57.029286 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:57.029313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:57.029327 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:57.033784 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:57.035317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:20:57.071655 ignition[959]: INFO : Ignition 2.19.0
Nov 8 00:20:57.071655 ignition[959]: INFO : Stage: files
Nov 8 00:20:57.074547 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:57.074547 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:57.074547 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:20:57.074547 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:20:57.074547 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:20:57.085116 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:20:57.085116 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:20:57.085116 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:20:57.085116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:20:57.085116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:20:57.076849 unknown[959]: wrote ssh authorized keys file for user: core
Nov 8 00:20:57.123299 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:20:57.200420 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:20:57.200420 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:20:57.206467 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 8 00:20:57.422667 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:20:57.848883 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:20:57.848883 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:20:57.854588 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:20:57.857316 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:20:57.860178 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:20:57.866980 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:20:57.870431 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:20:57.873550 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:20:57.877051 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:20:57.880768 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:20:57.884314 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:20:57.887631 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:20:57.892707 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:20:57.897463 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:20:57.901335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:20:58.267990 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:20:59.100020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:20:59.100020 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 8 00:20:59.106170 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:20:59.199610 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:20:59.207842 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:20:59.210436 ignition[959]: INFO : files: files passed
Nov 8 00:20:59.210436 ignition[959]: INFO : Ignition finished successfully
Nov 8 00:20:59.228003 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:20:59.237943 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:20:59.241942 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:20:59.243256 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:20:59.243367 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:20:59.275557 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 8 00:20:59.280248 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:20:59.280248 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:20:59.286068 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:20:59.288983 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:20:59.293880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:20:59.311933 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:20:59.340659 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:20:59.342368 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:20:59.346493 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:20:59.349785 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:20:59.353096 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:20:59.370980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:20:59.389209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:20:59.398029 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:20:59.408556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:20:59.412473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:20:59.416320 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:20:59.419486 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:20:59.421086 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:20:59.425160 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:20:59.428437 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:20:59.431433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:20:59.434932 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:20:59.438626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:20:59.442212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:20:59.445522 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:20:59.449484 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:20:59.452780 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:20:59.456011 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:20:59.458637 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:20:59.460252 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:20:59.464037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:20:59.467989 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:20:59.472196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:20:59.473913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:20:59.478472 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:20:59.480213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:20:59.484121 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:20:59.485880 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:20:59.489722 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:20:59.492574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:20:59.495857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:20:59.500645 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:20:59.503618 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:20:59.506783 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:20:59.508311 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:20:59.511510 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:20:59.512909 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:20:59.516244 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:20:59.518099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:20:59.522432 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:20:59.524007 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:20:59.542042 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:20:59.546583 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:20:59.549734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:20:59.551608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:20:59.555861 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:20:59.557601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:20:59.562049 ignition[1013]: INFO : Ignition 2.19.0
Nov 8 00:20:59.562049 ignition[1013]: INFO : Stage: umount
Nov 8 00:20:59.564906 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:59.564906 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:59.564906 ignition[1013]: INFO : umount: umount passed
Nov 8 00:20:59.564906 ignition[1013]: INFO : Ignition finished successfully
Nov 8 00:20:59.568878 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:20:59.569096 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:20:59.571787 systemd[1]: Stopped target network.target - Network.
Nov 8 00:20:59.578289 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:20:59.578392 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:20:59.581955 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:20:59.582032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:20:59.584952 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:20:59.585055 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:20:59.588060 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:20:59.588145 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:20:59.591613 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:20:59.594777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:20:59.598895 systemd-networkd[780]: eth0: DHCPv6 lease lost
Nov 8 00:20:59.599131 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:20:59.599893 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:20:59.600037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:20:59.603893 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:20:59.604042 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:20:59.607292 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:20:59.607422 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:20:59.610638 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:20:59.610858 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:20:59.620147 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:20:59.620222 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:20:59.622560 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:20:59.622631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:20:59.633986 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:20:59.636310 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:20:59.636385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:20:59.640550 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:20:59.640616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:20:59.643781 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:20:59.643853 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:20:59.647378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:20:59.647432 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:20:59.651078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:20:59.665834 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:20:59.666006 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:20:59.669327 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:20:59.669507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:20:59.673961 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:20:59.674031 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:20:59.676339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:20:59.676384 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:20:59.680022 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:20:59.680090 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:20:59.683582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:20:59.683657 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:20:59.686979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:20:59.687050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:59.701123 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:20:59.704053 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:20:59.704164 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:20:59.707569 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:20:59.707653 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:20:59.711079 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:20:59.711143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:20:59.714875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:20:59.714949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:59.719606 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:20:59.719773 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:20:59.723577 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:20:59.736166 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:20:59.748240 systemd[1]: Switching root.
Nov 8 00:20:59.784557 systemd-journald[192]: Journal stopped
Nov 8 00:21:01.395120 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:21:01.395217 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:21:01.395240 kernel: SELinux: policy capability open_perms=1
Nov 8 00:21:01.395262 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:21:01.395277 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:21:01.395292 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:21:01.395311 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:21:01.395327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:21:01.395349 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:21:01.395365 kernel: audit: type=1403 audit(1762561260.448:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:21:01.395394 systemd[1]: Successfully loaded SELinux policy in 47.740ms.
Nov 8 00:21:01.395419 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.543ms.
Nov 8 00:21:01.395436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:21:01.395452 systemd[1]: Detected virtualization kvm.
Nov 8 00:21:01.395468 systemd[1]: Detected architecture x86-64.
Nov 8 00:21:01.395484 systemd[1]: Detected first boot.
Nov 8 00:21:01.395503 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:21:01.395518 zram_generator::config[1058]: No configuration found.
Nov 8 00:21:01.395541 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:21:01.395557 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:21:01.395574 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:21:01.395590 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:21:01.395610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:21:01.395626 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:21:01.395642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:21:01.395662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:21:01.395683 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:21:01.395701 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:21:01.395717 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:21:01.395733 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:21:01.395749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:21:01.395798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:21:01.395816 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:21:01.395837 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:21:01.395853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:21:01.395869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:21:01.395885 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:21:01.395901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:21:01.395917 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:21:01.395933 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:21:01.395949 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:21:01.395968 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:21:01.395984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:21:01.396000 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:21:01.396017 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:21:01.396033 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:21:01.396049 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:21:01.396066 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:21:01.396084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:21:01.396100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:21:01.396120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:21:01.396137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:21:01.396163 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:21:01.396179 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:21:01.396196 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:21:01.396213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:01.396229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:21:01.396247 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:21:01.396263 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:21:01.396285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:21:01.396303 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:21:01.396319 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:21:01.396336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:01.396353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:21:01.396370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:21:01.396387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:01.396404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:21:01.396425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:01.396442 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:21:01.396461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:01.396478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:21:01.396494 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:21:01.396511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:21:01.396528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:21:01.396544 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:21:01.396561 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:21:01.396606 systemd-journald[1121]: Collecting audit messages is disabled.
Nov 8 00:21:01.396644 kernel: loop: module loaded
Nov 8 00:21:01.396660 kernel: fuse: init (API version 7.39)
Nov 8 00:21:01.396677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:21:01.396693 systemd-journald[1121]: Journal started
Nov 8 00:21:01.396722 systemd-journald[1121]: Runtime Journal (/run/log/journal/cc7af1fc1332480d8d89355b86e2ad8c) is 6.0M, max 48.3M, 42.2M free.
Nov 8 00:21:01.016860 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:21:01.046371 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 00:21:01.046887 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:21:01.435787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:21:01.438776 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:21:01.449606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:21:01.449670 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:21:01.451004 systemd[1]: Stopped verity-setup.service.
Nov 8 00:21:01.493786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:01.495772 kernel: ACPI: bus type drm_connector registered
Nov 8 00:21:01.495805 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:21:01.499224 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:21:01.501146 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:21:01.503040 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:21:01.504802 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:21:01.506701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:21:01.508785 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:21:01.510599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:21:01.512976 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:21:01.513154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:21:01.546657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:01.546854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:01.561295 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:21:01.561473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:21:01.563524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:01.563696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:01.566152 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:21:01.566327 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:21:01.568426 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:01.568599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:01.570675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:21:01.572989 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:21:01.575317 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:21:01.590093 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:21:01.597919 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:21:01.601201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:21:01.603106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:21:01.603154 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:21:01.605932 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:21:01.609273 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:21:01.612472 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:21:01.614338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:01.634208 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:21:01.639932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:21:01.642118 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:21:01.643537 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:21:01.647085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:21:01.648418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:21:01.665097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:21:01.681108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:21:01.687874 systemd-journald[1121]: Time spent on flushing to /var/log/journal/cc7af1fc1332480d8d89355b86e2ad8c is 18.547ms for 999 entries.
Nov 8 00:21:01.687874 systemd-journald[1121]: System Journal (/var/log/journal/cc7af1fc1332480d8d89355b86e2ad8c) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:21:02.179027 systemd-journald[1121]: Received client request to flush runtime journal.
Nov 8 00:21:02.179099 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:21:02.179202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:21:02.179232 kernel: loop1: detected capacity change from 0 to 142488
Nov 8 00:21:02.179252 kernel: loop2: detected capacity change from 0 to 224512
Nov 8 00:21:02.179275 kernel: loop3: detected capacity change from 0 to 140768
Nov 8 00:21:02.179294 kernel: loop4: detected capacity change from 0 to 142488
Nov 8 00:21:02.179310 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 00:21:02.179329 zram_generator::config[1221]: No configuration found.
Nov 8 00:21:01.688265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:21:01.725301 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:21:01.732145 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:21:01.735565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:21:01.740026 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:21:01.763020 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:21:01.766371 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:21:01.774040 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 8 00:21:01.774054 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 8 00:21:01.779932 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 00:21:01.782048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:21:01.792098 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:21:01.870252 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:21:01.879086 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:21:01.900377 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Nov 8 00:21:01.900393 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Nov 8 00:21:01.967986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:21:02.081405 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 8 00:21:02.082044 (sd-merge)[1195]: Merged extensions into '/usr'.
Nov 8 00:21:02.086542 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:21:02.086555 systemd[1]: Reloading...
Nov 8 00:21:02.301499 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:21:02.348489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:21:02.400984 systemd[1]: Reloading finished in 313 ms.
Nov 8 00:21:02.441184 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:21:02.443500 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:21:02.445959 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:21:02.448505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:21:02.460972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:21:02.477199 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:21:02.480227 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:21:02.483421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:21:02.581908 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:21:02.582333 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:21:02.583512 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:21:02.583844 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Nov 8 00:21:02.583931 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Nov 8 00:21:02.587379 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:21:02.587394 systemd-tmpfiles[1266]: Skipping /boot
Nov 8 00:21:02.589367 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:21:02.589387 systemd[1]: Reloading...
Nov 8 00:21:02.600962 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:21:02.600977 systemd-tmpfiles[1266]: Skipping /boot
Nov 8 00:21:02.644134 zram_generator::config[1297]: No configuration found.
Nov 8 00:21:02.778355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:21:02.828293 systemd[1]: Reloading finished in 238 ms.
Nov 8 00:21:02.846693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:21:02.866286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:21:02.961976 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:21:02.965275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:21:02.969901 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:21:02.974923 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:21:02.981699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:02.981901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:02.985020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:02.988231 augenrules[1353]: No rules
Nov 8 00:21:02.989487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:02.995806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:02.997932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:02.998059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:02.999128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:21:03.001480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:03.001666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:03.004305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:03.004476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:03.007519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:03.007801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:03.074512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:21:03.077387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:21:03.090660 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:21:03.123093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:03.123601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:03.137065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:03.140085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:21:03.142773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:03.145721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:03.147587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:03.149729 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:21:03.151402 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:21:03.151507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:03.152853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:03.153054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:03.155424 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:21:03.155601 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:21:03.157940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:03.158139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:03.160722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:03.160917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:03.165992 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:21:03.172454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:21:03.172525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:21:03.179891 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:21:03.197345 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:21:03.333398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:21:03.335605 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:21:03.341063 systemd-resolved[1347]: Positive Trust Anchors:
Nov 8 00:21:03.341079 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:21:03.341119 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:21:03.345088 systemd-resolved[1347]: Defaulting to hostname 'linux'.
Nov 8 00:21:03.346733 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:21:03.351729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:21:03.553524 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:21:03.568259 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:21:03.598888 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:21:03.618053 systemd-udevd[1386]: Using default interface naming scheme 'v255'.
Nov 8 00:21:03.672460 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:21:03.679743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:21:03.691981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:21:03.726740 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:21:03.741788 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1403)
Nov 8 00:21:03.752578 systemd-networkd[1395]: lo: Link UP
Nov 8 00:21:03.752592 systemd-networkd[1395]: lo: Gained carrier
Nov 8 00:21:03.755397 systemd-networkd[1395]: Enumeration completed
Nov 8 00:21:03.755485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:21:03.756839 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:21:03.756843 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:21:03.757652 systemd-networkd[1395]: eth0: Link UP
Nov 8 00:21:03.757656 systemd-networkd[1395]: eth0: Gained carrier
Nov 8 00:21:03.757668 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:21:03.758264 systemd[1]: Reached target network.target - Network.
Nov 8 00:21:03.766948 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:21:03.770883 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:21:03.771833 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
Nov 8 00:21:03.772900 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 8 00:21:03.772956 systemd-timesyncd[1377]: Initial clock synchronization to Sat 2025-11-08 00:21:03.645127 UTC.
Nov 8 00:21:03.778331 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:21:03.785777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:21:03.798996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:21:03.842046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:03.852823 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:21:03.852909 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 8 00:21:03.855515 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:21:03.856019 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:21:03.857457 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:21:03.858644 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:21:03.877385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:21:03.882814 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 8 00:21:03.969812 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:21:04.016567 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:21:04.020392 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:21:04.039804 kernel: kvm_amd: TSC scaling supported
Nov 8 00:21:04.039876 kernel: kvm_amd: Nested Virtualization enabled
Nov 8 00:21:04.039902 kernel: kvm_amd: Nested Paging enabled
Nov 8 00:21:04.039944 kernel: kvm_amd: LBR virtualization supported
Nov 8 00:21:04.039967 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 8 00:21:04.039994 kernel: kvm_amd: Virtual GIF supported
Nov 8 00:21:04.052877 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:04.067776 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:21:04.111315 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:21:04.124222 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:21:04.134994 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:21:04.163124 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:21:04.165708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:21:04.167714 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:21:04.169788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:21:04.172032 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:21:04.174507 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:21:04.176703 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:21:04.178980 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:21:04.181246 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:21:04.181285 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:21:04.182982 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:21:04.185726 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:21:04.189582 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:21:04.199712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:21:04.202962 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:21:04.206008 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:21:04.208416 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:21:04.210449 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:21:04.212297 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:21:04.212328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:21:04.225096 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:21:04.228614 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:21:04.231246 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:21:04.232657 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:21:04.236982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:21:04.239729 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:21:04.243570 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:21:04.250014 jq[1442]: false
Nov 8 00:21:04.249740 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:21:04.252956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:21:04.257022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:21:04.263158 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:21:04.264691 extend-filesystems[1443]: Found loop3
Nov 8 00:21:04.264691 extend-filesystems[1443]: Found loop4
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found loop5
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found sr0
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda1
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda2
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda3
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found usr
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda4
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda6
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda7
Nov 8 00:21:04.272971 extend-filesystems[1443]: Found vda9
Nov 8 00:21:04.272971 extend-filesystems[1443]: Checking size of /dev/vda9
Nov 8 00:21:04.265600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:21:04.279137 dbus-daemon[1441]: [system] SELinux support is enabled
Nov 8 00:21:04.266940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:21:04.267632 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:21:04.294699 jq[1457]: true
Nov 8 00:21:04.273873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:21:04.277403 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:21:04.286887 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:21:04.293341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:21:04.293610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:21:04.295560 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:21:04.295862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:21:04.299447 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:21:04.302014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:21:04.303103 update_engine[1455]: I20251108 00:21:04.303010 1455 main.cc:92] Flatcar Update Engine starting
Nov 8 00:21:04.304638 update_engine[1455]: I20251108 00:21:04.304455 1455 update_check_scheduler.cc:74] Next update check in 10m51s
Nov 8 00:21:04.315363 jq[1464]: true
Nov 8 00:21:04.328910 tar[1463]: linux-amd64/LICENSE
Nov 8 00:21:04.328910 tar[1463]: linux-amd64/helm
Nov 8 00:21:04.327589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:21:04.327654 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:21:04.329156 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:21:04.329892 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:21:04.329917 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:21:04.332144 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:21:04.345015 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:21:04.349859 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 8 00:21:04.350224 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:21:04.354806 systemd-logind[1451]: New seat seat0.
Nov 8 00:21:04.405734 extend-filesystems[1443]: Resized partition /dev/vda9
Nov 8 00:21:04.407165 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:21:04.421510 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:21:04.442240 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1404)
Nov 8 00:21:04.522291 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 8 00:21:04.539729 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:21:04.561783 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 8 00:21:04.563250 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:21:04.564590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:21:04.573499 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 8 00:21:04.588119 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 8 00:21:04.588119 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 8 00:21:04.588119 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 8 00:21:04.593819 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Nov 8 00:21:04.596215 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:21:04.596478 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:21:04.970017 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:21:04.998324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:21:05.006091 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:21:05.019085 containerd[1465]: time="2025-11-08T00:21:05.018172015Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:21:05.018125 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:21:05.018370 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:21:05.066144 containerd[1465]: time="2025-11-08T00:21:05.065914917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.067160 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:21:05.068312 containerd[1465]: time="2025-11-08T00:21:05.068267531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:21:05.068312 containerd[1465]: time="2025-11-08T00:21:05.068305506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:21:05.068365 containerd[1465]: time="2025-11-08T00:21:05.068323147Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:21:05.068583 containerd[1465]: time="2025-11-08T00:21:05.068556836Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:21:05.068583 containerd[1465]: time="2025-11-08T00:21:05.068581119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.068690 containerd[1465]: time="2025-11-08T00:21:05.068664309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:21:05.068690 containerd[1465]: time="2025-11-08T00:21:05.068682914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.068971 containerd[1465]: time="2025-11-08T00:21:05.068941166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:21:05.068971 containerd[1465]: time="2025-11-08T00:21:05.068963251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069023 containerd[1465]: time="2025-11-08T00:21:05.068977481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069023 containerd[1465]: time="2025-11-08T00:21:05.068989970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069119 containerd[1465]: time="2025-11-08T00:21:05.069096588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069425 containerd[1465]: time="2025-11-08T00:21:05.069397061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069577 containerd[1465]: time="2025-11-08T00:21:05.069543972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:21:05.069577 containerd[1465]: time="2025-11-08T00:21:05.069570014Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:21:05.069695 containerd[1465]: time="2025-11-08T00:21:05.069677190Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:21:05.071816 containerd[1465]: time="2025-11-08T00:21:05.069741148Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:21:05.085940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:21:05.095053 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:21:05.098467 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:21:05.100520 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:21:05.195970 systemd-networkd[1395]: eth0: Gained IPv6LL
Nov 8 00:21:05.200537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:21:05.269403 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:21:05.277961 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 8 00:21:05.281186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:21:05.283824 tar[1463]: linux-amd64/README.md
Nov 8 00:21:05.286825 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:21:05.300174 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 8 00:21:05.312251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:21:05.316125 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 8 00:21:05.316408 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 8 00:21:05.319504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:21:05.551227 containerd[1465]: time="2025-11-08T00:21:05.551084657Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:21:05.551227 containerd[1465]: time="2025-11-08T00:21:05.551224120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:21:05.551335 containerd[1465]: time="2025-11-08T00:21:05.551249675Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:21:05.551335 containerd[1465]: time="2025-11-08T00:21:05.551282082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:21:05.551335 containerd[1465]: time="2025-11-08T00:21:05.551309467Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:21:05.551606 containerd[1465]: time="2025-11-08T00:21:05.551569429Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:21:05.552366 containerd[1465]: time="2025-11-08T00:21:05.552323839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:21:05.552667 containerd[1465]: time="2025-11-08T00:21:05.552636214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:21:05.552667 containerd[1465]: time="2025-11-08T00:21:05.552657216Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:21:05.552717 containerd[1465]: time="2025-11-08T00:21:05.552672460Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:21:05.552773 containerd[1465]: time="2025-11-08T00:21:05.552739015Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552797 containerd[1465]: time="2025-11-08T00:21:05.552784567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552816 containerd[1465]: time="2025-11-08T00:21:05.552809247Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552835 containerd[1465]: time="2025-11-08T00:21:05.552826987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552879 containerd[1465]: time="2025-11-08T00:21:05.552852463Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552879 containerd[1465]: time="2025-11-08T00:21:05.552875563Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552930 containerd[1465]: time="2025-11-08T00:21:05.552892845Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552930 containerd[1465]: time="2025-11-08T00:21:05.552906807Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:21:05.552966 containerd[1465]: time="2025-11-08T00:21:05.552952260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.552995 containerd[1465]: time="2025-11-08T00:21:05.552971630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.552995 containerd[1465]: time="2025-11-08T00:21:05.552986914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553043 containerd[1465]: time="2025-11-08T00:21:05.553002019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553043 containerd[1465]: time="2025-11-08T00:21:05.553017104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553043 containerd[1465]: time="2025-11-08T00:21:05.553031094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553099 containerd[1465]: time="2025-11-08T00:21:05.553044747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553099 containerd[1465]: time="2025-11-08T00:21:05.553066415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553099 containerd[1465]: time="2025-11-08T00:21:05.553079242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553099657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553113042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553134550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553148830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553169801Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553202098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553213 containerd[1465]: time="2025-11-08T00:21:05.553214787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553343 containerd[1465]: time="2025-11-08T00:21:05.553229713Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:21:05.553343 containerd[1465]: time="2025-11-08T00:21:05.553321504Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:21:05.553379 containerd[1465]: time="2025-11-08T00:21:05.553347468Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:21:05.553379 containerd[1465]: time="2025-11-08T00:21:05.553362215Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:21:05.553422 containerd[1465]: time="2025-11-08T00:21:05.553379258Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:21:05.553422 containerd[1465]: time="2025-11-08T00:21:05.553390863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553422 containerd[1465]: time="2025-11-08T00:21:05.553409487Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:21:05.553483 containerd[1465]: time="2025-11-08T00:21:05.553426154Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:21:05.553483 containerd[1465]: time="2025-11-08T00:21:05.553438206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:21:05.553909 containerd[1465]: time="2025-11-08T00:21:05.553835880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:21:05.553909 containerd[1465]: time="2025-11-08T00:21:05.553907117Z" level=info msg="Connect containerd service" Nov 8 00:21:05.554059 containerd[1465]: time="2025-11-08T00:21:05.553954918Z" level=info msg="using legacy CRI server" Nov 8 00:21:05.554059 containerd[1465]: time="2025-11-08T00:21:05.553977569Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:21:05.554213 containerd[1465]: time="2025-11-08T00:21:05.554179588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:21:05.555361 containerd[1465]: time="2025-11-08T00:21:05.555327356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:21:05.555573 containerd[1465]: time="2025-11-08T00:21:05.555503660Z" level=info msg="Start subscribing containerd event" Nov 8 
00:21:05.555704 containerd[1465]: time="2025-11-08T00:21:05.555601478Z" level=info msg="Start recovering state"
Nov 8 00:21:05.555852 containerd[1465]: time="2025-11-08T00:21:05.555809225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:21:05.555888 containerd[1465]: time="2025-11-08T00:21:05.555809782Z" level=info msg="Start event monitor"
Nov 8 00:21:05.556352 containerd[1465]: time="2025-11-08T00:21:05.555910981Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:21:05.556352 containerd[1465]: time="2025-11-08T00:21:05.555914829Z" level=info msg="Start snapshots syncer"
Nov 8 00:21:05.556352 containerd[1465]: time="2025-11-08T00:21:05.556236790Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:21:05.556352 containerd[1465]: time="2025-11-08T00:21:05.556256470Z" level=info msg="Start streaming server"
Nov 8 00:21:05.557565 containerd[1465]: time="2025-11-08T00:21:05.556499537Z" level=info msg="containerd successfully booted in 0.539431s"
Nov 8 00:21:05.556610 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:21:05.795261 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:21:05.806160 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:37426.service - OpenSSH per-connection server daemon (10.0.0.1:37426).
Nov 8 00:21:05.846166 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:05.848614 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:05.858447 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:21:05.872960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:21:05.877209 systemd-logind[1451]: New session 1 of user core.
Nov 8 00:21:05.891203 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:21:06.023341 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:21:06.028429 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:21:06.170553 systemd[1554]: Queued start job for default target default.target.
Nov 8 00:21:06.332538 systemd[1554]: Created slice app.slice - User Application Slice.
Nov 8 00:21:06.332572 systemd[1554]: Reached target paths.target - Paths.
Nov 8 00:21:06.332588 systemd[1554]: Reached target timers.target - Timers.
Nov 8 00:21:06.334553 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:21:06.362139 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:21:06.362292 systemd[1554]: Reached target sockets.target - Sockets.
Nov 8 00:21:06.362311 systemd[1554]: Reached target basic.target - Basic System.
Nov 8 00:21:06.362359 systemd[1554]: Reached target default.target - Main User Target.
Nov 8 00:21:06.362398 systemd[1554]: Startup finished in 324ms.
Nov 8 00:21:06.362892 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:21:06.374890 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:21:06.444399 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:37428.service - OpenSSH per-connection server daemon (10.0.0.1:37428).
Nov 8 00:21:06.481145 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 37428 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:06.483549 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:06.553431 systemd-logind[1451]: New session 2 of user core.
Nov 8 00:21:06.566896 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:21:06.625449 sshd[1565]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:06.637525 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:37428.service: Deactivated successfully.
Nov 8 00:21:06.639525 systemd[1]: session-2.scope: Deactivated successfully.
Nov 8 00:21:06.641361 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Nov 8 00:21:06.647016 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:37432.service - OpenSSH per-connection server daemon (10.0.0.1:37432).
Nov 8 00:21:06.650241 systemd-logind[1451]: Removed session 2.
Nov 8 00:21:06.678527 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 37432 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:06.680508 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:06.686450 systemd-logind[1451]: New session 3 of user core.
Nov 8 00:21:06.701941 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 00:21:06.859090 sshd[1572]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:06.862954 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:37432.service: Deactivated successfully.
Nov 8 00:21:06.864843 systemd[1]: session-3.scope: Deactivated successfully.
Nov 8 00:21:06.865354 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Nov 8 00:21:06.866196 systemd-logind[1451]: Removed session 3.
Nov 8 00:21:07.008850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:07.011489 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:21:07.013672 systemd[1]: Startup finished in 1.288s (kernel) + 8.718s (initrd) + 6.611s (userspace) = 16.617s.
Nov 8 00:21:07.017525 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:21:07.522078 kubelet[1583]: E1108 00:21:07.522004 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:21:07.526343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:21:07.526567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:21:07.526960 systemd[1]: kubelet.service: Consumed 1.921s CPU time.
Nov 8 00:21:16.725164 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:46332.service - OpenSSH per-connection server daemon (10.0.0.1:46332).
Nov 8 00:21:16.766085 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 46332 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:16.767971 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:16.771940 systemd-logind[1451]: New session 4 of user core.
Nov 8 00:21:16.782883 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 00:21:16.837454 sshd[1596]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:16.853497 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:46332.service: Deactivated successfully.
Nov 8 00:21:16.855385 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:21:16.857087 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Nov 8 00:21:16.871084 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:46346.service - OpenSSH per-connection server daemon (10.0.0.1:46346).
Nov 8 00:21:16.872021 systemd-logind[1451]: Removed session 4.
Nov 8 00:21:16.901939 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 46346 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:16.903632 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:16.907502 systemd-logind[1451]: New session 5 of user core.
Nov 8 00:21:16.916877 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:21:16.966526 sshd[1603]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:16.974561 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:46346.service: Deactivated successfully.
Nov 8 00:21:16.976345 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:21:16.978102 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:21:16.993010 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358).
Nov 8 00:21:16.993879 systemd-logind[1451]: Removed session 5.
Nov 8 00:21:17.023022 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:17.024572 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:17.028298 systemd-logind[1451]: New session 6 of user core.
Nov 8 00:21:17.037870 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 00:21:17.091782 sshd[1610]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:17.111471 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:46358.service: Deactivated successfully.
Nov 8 00:21:17.113301 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 00:21:17.115021 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Nov 8 00:21:17.116458 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:46368.service - OpenSSH per-connection server daemon (10.0.0.1:46368).
Nov 8 00:21:17.117167 systemd-logind[1451]: Removed session 6.
Nov 8 00:21:17.153668 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 46368 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:17.155514 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:17.159852 systemd-logind[1451]: New session 7 of user core.
Nov 8 00:21:17.166890 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 8 00:21:17.229419 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 00:21:17.229810 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:21:17.248324 sudo[1620]: pam_unix(sudo:session): session closed for user root
Nov 8 00:21:17.250519 sshd[1617]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:17.274585 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:46368.service: Deactivated successfully.
Nov 8 00:21:17.276446 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 00:21:17.278097 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Nov 8 00:21:17.279483 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:46372.service - OpenSSH per-connection server daemon (10.0.0.1:46372).
Nov 8 00:21:17.280278 systemd-logind[1451]: Removed session 7.
Nov 8 00:21:17.314397 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 46372 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:17.316125 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:17.319942 systemd-logind[1451]: New session 8 of user core.
Nov 8 00:21:17.334862 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 8 00:21:17.389619 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 8 00:21:17.390014 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:21:17.394834 sudo[1629]: pam_unix(sudo:session): session closed for user root
Nov 8 00:21:17.401917 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 8 00:21:17.402262 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:21:17.426015 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 8 00:21:17.427804 auditctl[1632]: No rules
Nov 8 00:21:17.429241 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 8 00:21:17.429555 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 8 00:21:17.431572 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:21:17.480496 augenrules[1650]: No rules
Nov 8 00:21:17.481416 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:21:17.482858 sudo[1628]: pam_unix(sudo:session): session closed for user root
Nov 8 00:21:17.484621 sshd[1625]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:17.491855 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:46372.service: Deactivated successfully.
Nov 8 00:21:17.493802 systemd[1]: session-8.scope: Deactivated successfully.
Nov 8 00:21:17.495362 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
Nov 8 00:21:17.503053 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:46388.service - OpenSSH per-connection server daemon (10.0.0.1:46388).
Nov 8 00:21:17.503921 systemd-logind[1451]: Removed session 8.
Nov 8 00:21:17.533097 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 46388 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:21:17.534728 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:17.535673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:21:17.542927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:21:17.545114 systemd-logind[1451]: New session 9 of user core.
Nov 8 00:21:17.547924 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 8 00:21:17.602143 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 8 00:21:17.602517 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:21:17.814129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:17.820123 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:21:18.051964 kubelet[1680]: E1108 00:21:18.051903 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:21:18.059780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:21:18.060053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:21:18.175985 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 8 00:21:18.176225 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:21:18.829619 dockerd[1695]: time="2025-11-08T00:21:18.829516079Z" level=info msg="Starting up" Nov 8 00:21:19.466429 dockerd[1695]: time="2025-11-08T00:21:19.466363255Z" level=info msg="Loading containers: start." Nov 8 00:21:19.596800 kernel: Initializing XFRM netlink socket Nov 8 00:21:19.681356 systemd-networkd[1395]: docker0: Link UP Nov 8 00:21:19.707820 dockerd[1695]: time="2025-11-08T00:21:19.707735331Z" level=info msg="Loading containers: done." Nov 8 00:21:19.727027 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1194550173-merged.mount: Deactivated successfully. Nov 8 00:21:19.728509 dockerd[1695]: time="2025-11-08T00:21:19.728462977Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:21:19.728607 dockerd[1695]: time="2025-11-08T00:21:19.728590310Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:21:19.728737 dockerd[1695]: time="2025-11-08T00:21:19.728717212Z" level=info msg="Daemon has completed initialization" Nov 8 00:21:19.773328 dockerd[1695]: time="2025-11-08T00:21:19.773228763Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:21:19.773515 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:21:20.716467 containerd[1465]: time="2025-11-08T00:21:20.716421206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:21:21.436919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795794096.mount: Deactivated successfully. 
Nov 8 00:21:22.853656 containerd[1465]: time="2025-11-08T00:21:22.853573864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.854281 containerd[1465]: time="2025-11-08T00:21:22.854208568Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:21:22.855365 containerd[1465]: time="2025-11-08T00:21:22.855331758Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.858171 containerd[1465]: time="2025-11-08T00:21:22.858147882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.859309 containerd[1465]: time="2025-11-08T00:21:22.859274339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.142811489s" Nov 8 00:21:22.859349 containerd[1465]: time="2025-11-08T00:21:22.859313754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:21:22.859876 containerd[1465]: time="2025-11-08T00:21:22.859855876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:21:24.295026 containerd[1465]: time="2025-11-08T00:21:24.294954502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:24.295612 containerd[1465]: time="2025-11-08T00:21:24.295549022Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:21:24.296883 containerd[1465]: time="2025-11-08T00:21:24.296849345Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:24.300520 containerd[1465]: time="2025-11-08T00:21:24.300474993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:24.301687 containerd[1465]: time="2025-11-08T00:21:24.301632922Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.441750729s" Nov 8 00:21:24.301687 containerd[1465]: time="2025-11-08T00:21:24.301685336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:21:24.302175 containerd[1465]: time="2025-11-08T00:21:24.302154875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:21:25.839181 containerd[1465]: time="2025-11-08T00:21:25.839118467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.840726 containerd[1465]: time="2025-11-08T00:21:25.840651677Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:21:25.841917 containerd[1465]: time="2025-11-08T00:21:25.841872673Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.846737 containerd[1465]: time="2025-11-08T00:21:25.846707079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.847947 containerd[1465]: time="2025-11-08T00:21:25.847902540Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.545718103s" Nov 8 00:21:25.847947 containerd[1465]: time="2025-11-08T00:21:25.847939833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:21:25.848527 containerd[1465]: time="2025-11-08T00:21:25.848487863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:21:27.161018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720472949.mount: Deactivated successfully. 
Nov 8 00:21:28.208103 containerd[1465]: time="2025-11-08T00:21:28.207987043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.209059 containerd[1465]: time="2025-11-08T00:21:28.208937115Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:21:28.210390 containerd[1465]: time="2025-11-08T00:21:28.210343908Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.213203 containerd[1465]: time="2025-11-08T00:21:28.213152673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.214167 containerd[1465]: time="2025-11-08T00:21:28.214112856Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.365583267s" Nov 8 00:21:28.214167 containerd[1465]: time="2025-11-08T00:21:28.214149522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:21:28.214857 containerd[1465]: time="2025-11-08T00:21:28.214818523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:21:28.258627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:21:28.273064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:21:28.466021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:28.471231 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:21:28.674376 kubelet[1927]: E1108 00:21:28.674300 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:21:28.678849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:21:28.679120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:21:29.154735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776979765.mount: Deactivated successfully.
Nov 8 00:21:30.144561 containerd[1465]: time="2025-11-08T00:21:30.144479851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.145265 containerd[1465]: time="2025-11-08T00:21:30.145157972Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 8 00:21:30.146438 containerd[1465]: time="2025-11-08T00:21:30.146398651Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.149839 containerd[1465]: time="2025-11-08T00:21:30.149744790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.151038 containerd[1465]: time="2025-11-08T00:21:30.150991682Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.936141181s"
Nov 8 00:21:30.151038 containerd[1465]: time="2025-11-08T00:21:30.151037179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 8 00:21:30.151617 containerd[1465]: time="2025-11-08T00:21:30.151581329Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 8 00:21:30.598429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408423979.mount: Deactivated successfully.
Nov 8 00:21:30.604708 containerd[1465]: time="2025-11-08T00:21:30.604647021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.605351 containerd[1465]: time="2025-11-08T00:21:30.605304144Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 8 00:21:30.606474 containerd[1465]: time="2025-11-08T00:21:30.606440482Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.608593 containerd[1465]: time="2025-11-08T00:21:30.608559268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:30.609293 containerd[1465]: time="2025-11-08T00:21:30.609256987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.645827ms"
Nov 8 00:21:30.609346 containerd[1465]: time="2025-11-08T00:21:30.609299124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 8 00:21:30.609857 containerd[1465]: time="2025-11-08T00:21:30.609824817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 8 00:21:31.239743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002207591.mount: Deactivated successfully.
Nov 8 00:21:34.280084 containerd[1465]: time="2025-11-08T00:21:34.279988442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:34.281391 containerd[1465]: time="2025-11-08T00:21:34.281342793Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 8 00:21:34.283199 containerd[1465]: time="2025-11-08T00:21:34.283147651Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:34.287509 containerd[1465]: time="2025-11-08T00:21:34.287453262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:34.288906 containerd[1465]: time="2025-11-08T00:21:34.288859001Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.679002108s"
Nov 8 00:21:34.288906 containerd[1465]: time="2025-11-08T00:21:34.288902793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 8 00:21:37.022069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:37.039968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:21:37.064667 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-9.scope)...
Nov 8 00:21:37.064685 systemd[1]: Reloading...
Nov 8 00:21:37.151794 zram_generator::config[2116]: No configuration found.
Nov 8 00:21:37.469711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:21:37.548015 systemd[1]: Reloading finished in 482 ms.
Nov 8 00:21:37.607331 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 8 00:21:37.607460 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 8 00:21:37.607945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:37.610773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:21:37.779905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:21:37.785088 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:21:37.828788 kubelet[2165]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:21:37.828788 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:21:37.828788 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:21:37.828788 kubelet[2165]: I1108 00:21:37.826382 2165 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:21:38.074014 kubelet[2165]: I1108 00:21:38.073867 2165 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:21:38.074014 kubelet[2165]: I1108 00:21:38.073901 2165 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:21:38.074250 kubelet[2165]: I1108 00:21:38.074222 2165 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:21:38.098282 kubelet[2165]: E1108 00:21:38.098224 2165 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:38.098830 kubelet[2165]: I1108 00:21:38.098793 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:21:38.107586 kubelet[2165]: E1108 00:21:38.107555 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:21:38.107586 kubelet[2165]: I1108 00:21:38.107584 2165 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:21:38.112649 kubelet[2165]: I1108 00:21:38.112620 2165 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:21:38.112914 kubelet[2165]: I1108 00:21:38.112872 2165 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:21:38.113077 kubelet[2165]: I1108 00:21:38.112901 2165 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:21:38.113542 kubelet[2165]: I1108 00:21:38.113515 2165 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:21:38.113542 kubelet[2165]: I1108 00:21:38.113532 2165 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:21:38.113713 kubelet[2165]: I1108 00:21:38.113688 2165 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:21:38.116616 kubelet[2165]: I1108 00:21:38.116585 2165 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:21:38.116616 kubelet[2165]: I1108 00:21:38.116618 2165 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:21:38.116784 kubelet[2165]: I1108 00:21:38.116635 2165 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:21:38.116784 kubelet[2165]: I1108 00:21:38.116646 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:21:38.118516 kubelet[2165]: W1108 00:21:38.118461 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:38.118591 kubelet[2165]: E1108 00:21:38.118545 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:38.118700 kubelet[2165]: W1108 00:21:38.118490 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:38.118700 kubelet[2165]: E1108 00:21:38.118673 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:38.119423 kubelet[2165]: I1108 00:21:38.119404 2165 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:21:38.119786 kubelet[2165]: I1108 00:21:38.119768 2165 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:21:38.119844 kubelet[2165]: W1108 00:21:38.119830 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:21:38.122021 kubelet[2165]: I1108 00:21:38.121992 2165 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:21:38.122087 kubelet[2165]: I1108 00:21:38.122039 2165 server.go:1287] "Started kubelet"
Nov 8 00:21:38.124481 kubelet[2165]: I1108 00:21:38.124451 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:21:38.126796 kubelet[2165]: I1108 00:21:38.124778 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:21:38.126796 kubelet[2165]: I1108 00:21:38.125475 2165 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:21:38.126796 kubelet[2165]: I1108 00:21:38.125741 2165 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:21:38.126796 kubelet[2165]: I1108 00:21:38.125822 2165 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:21:38.126796 kubelet[2165]: I1108 00:21:38.126745 2165 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:21:38.127724 kubelet[2165]: E1108 00:21:38.127698 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:38.127794 kubelet[2165]: I1108 00:21:38.127736 2165 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:21:38.128078 kubelet[2165]: I1108 00:21:38.127910 2165 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:21:38.128078 kubelet[2165]: I1108 00:21:38.127961 2165 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:21:38.128245 kubelet[2165]: W1108 00:21:38.128209 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:38.128282 kubelet[2165]: E1108 00:21:38.128251 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:38.128571 kubelet[2165]: I1108 00:21:38.128531 2165 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:21:38.128677 kubelet[2165]: I1108 00:21:38.128600 2165 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:21:38.129718 kubelet[2165]: E1108 00:21:38.128745 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms"
Nov 8 00:21:38.129718 kubelet[2165]: E1108 00:21:38.127949 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0298569ea0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:21:38.122009101 +0000 UTC m=+0.332861737,LastTimestamp:2025-11-08 00:21:38.122009101 +0000 UTC m=+0.332861737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 8 00:21:38.130335 kubelet[2165]: I1108 00:21:38.130311 2165 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:21:38.130583 kubelet[2165]: E1108 00:21:38.130552 2165 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:21:38.147708 kubelet[2165]: I1108 00:21:38.147634 2165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:21:38.149814 kubelet[2165]: I1108 00:21:38.149042 2165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:21:38.149814 kubelet[2165]: I1108 00:21:38.149069 2165 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:21:38.149814 kubelet[2165]: I1108 00:21:38.149099 2165 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:21:38.149814 kubelet[2165]: I1108 00:21:38.149107 2165 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:21:38.149814 kubelet[2165]: E1108 00:21:38.149157 2165 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:21:38.151018 kubelet[2165]: I1108 00:21:38.150991 2165 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:21:38.151018 kubelet[2165]: I1108 00:21:38.151007 2165 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:21:38.151103 kubelet[2165]: I1108 00:21:38.151044 2165 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:21:38.151253 kubelet[2165]: W1108 00:21:38.151205 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:38.151293 kubelet[2165]: E1108 00:21:38.151262 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:38.228603 kubelet[2165]: E1108 00:21:38.228555 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:38.249819 kubelet[2165]: E1108 00:21:38.249782 2165 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 8 00:21:38.329110 kubelet[2165]: E1108 00:21:38.328998 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:38.329236 kubelet[2165]: E1108 00:21:38.329219 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms"
Nov 8 00:21:38.429552 kubelet[2165]: E1108 00:21:38.429491 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:38.450748 kubelet[2165]: E1108 00:21:38.450686 2165 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 8 00:21:38.530126 kubelet[2165]: E1108 00:21:38.530076 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:38.573157 kubelet[2165]: I1108 00:21:38.573096 2165 policy_none.go:49] "None policy: Start"
Nov 8 00:21:38.573157 kubelet[2165]: I1108 00:21:38.573147 2165 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:21:38.573157 kubelet[2165]: I1108 00:21:38.573167 2165 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:21:38.596816 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 8 00:21:38.613396 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 8 00:21:38.626139 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 8 00:21:38.627461 kubelet[2165]: I1108 00:21:38.627425 2165 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:21:38.627745 kubelet[2165]: I1108 00:21:38.627725 2165 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:21:38.627811 kubelet[2165]: I1108 00:21:38.627742 2165 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:21:38.628104 kubelet[2165]: I1108 00:21:38.628002 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:21:38.628941 kubelet[2165]: E1108 00:21:38.628909 2165 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:21:38.629038 kubelet[2165]: E1108 00:21:38.628952 2165 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 8 00:21:38.638799 kubelet[2165]: E1108 00:21:38.638667 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0298569ea0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:21:38.122009101 +0000 UTC m=+0.332861737,LastTimestamp:2025-11-08 00:21:38.122009101 +0000 UTC m=+0.332861737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 8 00:21:38.729555 kubelet[2165]: I1108 00:21:38.729482 2165 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:21:38.729795 kubelet[2165]: E1108 00:21:38.729734 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms"
Nov 8 00:21:38.729840 kubelet[2165]: E1108 00:21:38.729807 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Nov 8 00:21:38.860875 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice.
Nov 8 00:21:38.884171 kubelet[2165]: E1108 00:21:38.884125 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:21:38.885888 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice.
Nov 8 00:21:38.888018 kubelet[2165]: E1108 00:21:38.887970 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:21:38.890352 systemd[1]: Created slice kubepods-burstable-poda376c2280220d0a24341b66320a1a995.slice - libcontainer container kubepods-burstable-poda376c2280220d0a24341b66320a1a995.slice.
Nov 8 00:21:38.892147 kubelet[2165]: E1108 00:21:38.892111 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:21:38.931869 kubelet[2165]: I1108 00:21:38.931817 2165 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:21:38.932302 kubelet[2165]: E1108 00:21:38.932259 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Nov 8 00:21:38.933354 kubelet[2165]: I1108 00:21:38.933320 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:38.933354 kubelet[2165]: I1108 00:21:38.933350 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:38.933466 kubelet[2165]: I1108 00:21:38.933390 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:38.933466 kubelet[2165]: I1108 00:21:38.933415 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:38.933466 kubelet[2165]: I1108 00:21:38.933445 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:38.933466 kubelet[2165]: I1108 00:21:38.933464 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:38.933590 kubelet[2165]: I1108 00:21:38.933484 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:38.933590 kubelet[2165]: I1108 00:21:38.933526 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:38.933590 kubelet[2165]: I1108 00:21:38.933544 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:39.139815 kubelet[2165]: W1108 00:21:39.139565 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:39.139815 kubelet[2165]: E1108 00:21:39.139681 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:39.185890 kubelet[2165]: E1108 00:21:39.185827 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:39.186868 containerd[1465]: time="2025-11-08T00:21:39.186821655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:39.189091 kubelet[2165]: E1108 00:21:39.189061 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:39.189737 containerd[1465]: time="2025-11-08T00:21:39.189688272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:39.193009 kubelet[2165]: E1108 00:21:39.192977 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:39.193597 containerd[1465]: time="2025-11-08T00:21:39.193556175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a376c2280220d0a24341b66320a1a995,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:39.226305 kubelet[2165]: W1108 00:21:39.226218 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:39.226305 kubelet[2165]: E1108 00:21:39.226301 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:39.334896 kubelet[2165]: I1108 00:21:39.334843 2165 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:21:39.335252 kubelet[2165]: E1108 00:21:39.335213 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Nov 8 00:21:39.410285 kubelet[2165]: W1108 00:21:39.410105 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:39.410285 kubelet[2165]: E1108 00:21:39.410190 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:39.459614 kubelet[2165]: W1108 00:21:39.459567 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Nov 8 00:21:39.459614 kubelet[2165]: E1108 00:21:39.459614 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:39.530734 kubelet[2165]: E1108 00:21:39.530669 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s"
Nov 8 00:21:40.136956 kubelet[2165]: I1108 00:21:40.136901 2165 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:21:40.137475 kubelet[2165]: E1108 00:21:40.137404 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Nov 8 00:21:40.241142 kubelet[2165]: E1108 00:21:40.241079 2165 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:21:40.917527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156623763.mount: Deactivated successfully.
Nov 8 00:21:40.925976 containerd[1465]: time="2025-11-08T00:21:40.925892022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:40.926820 containerd[1465]: time="2025-11-08T00:21:40.926770163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:21:40.927927 containerd[1465]: time="2025-11-08T00:21:40.927896746Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:40.928768 containerd[1465]: time="2025-11-08T00:21:40.928660231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:40.929876 containerd[1465]: time="2025-11-08T00:21:40.929831473Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:40.930840 containerd[1465]: time="2025-11-08T00:21:40.930743181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:40.931742 containerd[1465]: time="2025-11-08T00:21:40.931676051Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:40.937303 containerd[1465]: time="2025-11-08T00:21:40.937230231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:40.938355 
containerd[1465]: time="2025-11-08T00:21:40.938300402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.751374758s" Nov 8 00:21:40.939144 containerd[1465]: time="2025-11-08T00:21:40.939098185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.749283019s" Nov 8 00:21:40.942673 containerd[1465]: time="2025-11-08T00:21:40.942608916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.748946981s" Nov 8 00:21:41.087482 containerd[1465]: time="2025-11-08T00:21:41.087205130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:41.087482 containerd[1465]: time="2025-11-08T00:21:41.087254107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:41.087482 containerd[1465]: time="2025-11-08T00:21:41.087264619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.087482 containerd[1465]: time="2025-11-08T00:21:41.087339565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.087833 containerd[1465]: time="2025-11-08T00:21:41.087690115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:41.087833 containerd[1465]: time="2025-11-08T00:21:41.087799030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:41.087833 containerd[1465]: time="2025-11-08T00:21:41.087811184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.088019 containerd[1465]: time="2025-11-08T00:21:41.087994694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.096784 containerd[1465]: time="2025-11-08T00:21:41.095085590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:41.096784 containerd[1465]: time="2025-11-08T00:21:41.095157502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:41.096784 containerd[1465]: time="2025-11-08T00:21:41.095175934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.096784 containerd[1465]: time="2025-11-08T00:21:41.095271493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:41.132517 kubelet[2165]: E1108 00:21:41.132458 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="3.2s" Nov 8 00:21:41.144953 systemd[1]: Started cri-containerd-613bd25affeedc1d28f0233d1f1b10dff537d2fb4b1640bf72ba4ca2f735cfc5.scope - libcontainer container 613bd25affeedc1d28f0233d1f1b10dff537d2fb4b1640bf72ba4ca2f735cfc5. Nov 8 00:21:41.151221 systemd[1]: Started cri-containerd-7a9dc0ae28a95be2e64feb566c3bad31c0c9d424e3b353ac19516929862f6db3.scope - libcontainer container 7a9dc0ae28a95be2e64feb566c3bad31c0c9d424e3b353ac19516929862f6db3. Nov 8 00:21:41.153669 systemd[1]: Started cri-containerd-b9547a751091a28ecd4d1672c9f3c6b907ca3fcf0d217cf8a72044662aa0fd07.scope - libcontainer container b9547a751091a28ecd4d1672c9f3c6b907ca3fcf0d217cf8a72044662aa0fd07. 
Nov 8 00:21:41.200605 containerd[1465]: time="2025-11-08T00:21:41.200464540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a376c2280220d0a24341b66320a1a995,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a9dc0ae28a95be2e64feb566c3bad31c0c9d424e3b353ac19516929862f6db3\"" Nov 8 00:21:41.204396 kubelet[2165]: E1108 00:21:41.204362 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:41.212376 containerd[1465]: time="2025-11-08T00:21:41.212334520Z" level=info msg="CreateContainer within sandbox \"7a9dc0ae28a95be2e64feb566c3bad31c0c9d424e3b353ac19516929862f6db3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:21:41.217429 containerd[1465]: time="2025-11-08T00:21:41.217371508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"613bd25affeedc1d28f0233d1f1b10dff537d2fb4b1640bf72ba4ca2f735cfc5\"" Nov 8 00:21:41.218504 kubelet[2165]: E1108 00:21:41.218457 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:41.220505 containerd[1465]: time="2025-11-08T00:21:41.220384693Z" level=info msg="CreateContainer within sandbox \"613bd25affeedc1d28f0233d1f1b10dff537d2fb4b1640bf72ba4ca2f735cfc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:21:41.224596 containerd[1465]: time="2025-11-08T00:21:41.224569167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9547a751091a28ecd4d1672c9f3c6b907ca3fcf0d217cf8a72044662aa0fd07\"" Nov 8 00:21:41.225100 
kubelet[2165]: E1108 00:21:41.225064 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:41.226380 containerd[1465]: time="2025-11-08T00:21:41.226356571Z" level=info msg="CreateContainer within sandbox \"b9547a751091a28ecd4d1672c9f3c6b907ca3fcf0d217cf8a72044662aa0fd07\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:21:41.266385 kubelet[2165]: W1108 00:21:41.266340 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Nov 8 00:21:41.266385 kubelet[2165]: E1108 00:21:41.266377 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:41.335229 kubelet[2165]: W1108 00:21:41.335201 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Nov 8 00:21:41.335327 kubelet[2165]: E1108 00:21:41.335234 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:41.380015 containerd[1465]: time="2025-11-08T00:21:41.379953965Z" level=info msg="CreateContainer within sandbox 
\"613bd25affeedc1d28f0233d1f1b10dff537d2fb4b1640bf72ba4ca2f735cfc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c703f7d6246624ea0cd34fc057bceaff76c4f302ebb1c7feafdacc1e511cc4f6\"" Nov 8 00:21:41.380668 containerd[1465]: time="2025-11-08T00:21:41.380641130Z" level=info msg="StartContainer for \"c703f7d6246624ea0cd34fc057bceaff76c4f302ebb1c7feafdacc1e511cc4f6\"" Nov 8 00:21:41.384002 containerd[1465]: time="2025-11-08T00:21:41.383957882Z" level=info msg="CreateContainer within sandbox \"7a9dc0ae28a95be2e64feb566c3bad31c0c9d424e3b353ac19516929862f6db3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b871cf94e7a39a9e09a8d67c7cd8c5789c8b0c21baa1ab0551af126b2a44409\"" Nov 8 00:21:41.384434 containerd[1465]: time="2025-11-08T00:21:41.384401419Z" level=info msg="StartContainer for \"2b871cf94e7a39a9e09a8d67c7cd8c5789c8b0c21baa1ab0551af126b2a44409\"" Nov 8 00:21:41.385700 containerd[1465]: time="2025-11-08T00:21:41.385611623Z" level=info msg="CreateContainer within sandbox \"b9547a751091a28ecd4d1672c9f3c6b907ca3fcf0d217cf8a72044662aa0fd07\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"deb43f51069fb099b5a4f641ad0874cab534f18d3642f67486c9c7968a05b081\"" Nov 8 00:21:41.386221 containerd[1465]: time="2025-11-08T00:21:41.386196682Z" level=info msg="StartContainer for \"deb43f51069fb099b5a4f641ad0874cab534f18d3642f67486c9c7968a05b081\"" Nov 8 00:21:41.417888 systemd[1]: Started cri-containerd-c703f7d6246624ea0cd34fc057bceaff76c4f302ebb1c7feafdacc1e511cc4f6.scope - libcontainer container c703f7d6246624ea0cd34fc057bceaff76c4f302ebb1c7feafdacc1e511cc4f6. Nov 8 00:21:41.421942 systemd[1]: Started cri-containerd-2b871cf94e7a39a9e09a8d67c7cd8c5789c8b0c21baa1ab0551af126b2a44409.scope - libcontainer container 2b871cf94e7a39a9e09a8d67c7cd8c5789c8b0c21baa1ab0551af126b2a44409. 
Nov 8 00:21:41.423978 systemd[1]: Started cri-containerd-deb43f51069fb099b5a4f641ad0874cab534f18d3642f67486c9c7968a05b081.scope - libcontainer container deb43f51069fb099b5a4f641ad0874cab534f18d3642f67486c9c7968a05b081. Nov 8 00:21:41.551909 containerd[1465]: time="2025-11-08T00:21:41.551822096Z" level=info msg="StartContainer for \"c703f7d6246624ea0cd34fc057bceaff76c4f302ebb1c7feafdacc1e511cc4f6\" returns successfully" Nov 8 00:21:41.551909 containerd[1465]: time="2025-11-08T00:21:41.551858627Z" level=info msg="StartContainer for \"2b871cf94e7a39a9e09a8d67c7cd8c5789c8b0c21baa1ab0551af126b2a44409\" returns successfully" Nov 8 00:21:41.552109 containerd[1465]: time="2025-11-08T00:21:41.551877119Z" level=info msg="StartContainer for \"deb43f51069fb099b5a4f641ad0874cab534f18d3642f67486c9c7968a05b081\" returns successfully" Nov 8 00:21:41.738809 kubelet[2165]: I1108 00:21:41.738718 2165 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:42.162806 kubelet[2165]: E1108 00:21:42.162734 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:42.162956 kubelet[2165]: E1108 00:21:42.162880 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:42.164930 kubelet[2165]: E1108 00:21:42.164904 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:42.165014 kubelet[2165]: E1108 00:21:42.164992 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:42.167558 kubelet[2165]: E1108 00:21:42.167534 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:42.167645 kubelet[2165]: E1108 00:21:42.167622 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.038706 kubelet[2165]: I1108 00:21:43.038646 2165 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:21:43.038706 kubelet[2165]: E1108 00:21:43.038706 2165 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:21:43.049034 kubelet[2165]: E1108 00:21:43.048971 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.149527 kubelet[2165]: E1108 00:21:43.149472 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.168666 kubelet[2165]: E1108 00:21:43.168611 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:43.168844 kubelet[2165]: E1108 00:21:43.168719 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.168844 kubelet[2165]: E1108 00:21:43.168824 2165 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:43.168940 kubelet[2165]: E1108 00:21:43.168927 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.199348 kubelet[2165]: E1108 00:21:43.199319 2165 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:43.199432 kubelet[2165]: E1108 00:21:43.199418 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.249813 kubelet[2165]: E1108 00:21:43.249723 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.350606 kubelet[2165]: E1108 00:21:43.350421 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.451312 kubelet[2165]: E1108 00:21:43.451220 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.552102 kubelet[2165]: E1108 00:21:43.552038 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.653234 kubelet[2165]: E1108 00:21:43.653064 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.753839 kubelet[2165]: E1108 00:21:43.753741 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.854533 kubelet[2165]: E1108 00:21:43.854461 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:43.955359 kubelet[2165]: E1108 00:21:43.955192 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.055860 kubelet[2165]: E1108 00:21:44.055812 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.156413 kubelet[2165]: E1108 00:21:44.156356 2165 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.257517 kubelet[2165]: E1108 00:21:44.256990 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.358795 kubelet[2165]: E1108 00:21:44.357163 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.457730 kubelet[2165]: E1108 00:21:44.457656 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.558246 kubelet[2165]: E1108 00:21:44.558193 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.658418 kubelet[2165]: E1108 00:21:44.658351 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.759187 kubelet[2165]: E1108 00:21:44.759121 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.860008 kubelet[2165]: E1108 00:21:44.859847 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:44.960815 kubelet[2165]: E1108 00:21:44.960744 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:45.061745 kubelet[2165]: E1108 00:21:45.061643 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:45.162435 kubelet[2165]: E1108 00:21:45.162286 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:45.224931 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-9.scope)... Nov 8 00:21:45.224949 systemd[1]: Reloading... 
Nov 8 00:21:45.262576 kubelet[2165]: E1108 00:21:45.262531 2165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:45.366863 zram_generator::config[2489]: No configuration found. Nov 8 00:21:45.429402 kubelet[2165]: I1108 00:21:45.429293 2165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:45.438268 kubelet[2165]: I1108 00:21:45.438229 2165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:45.441457 kubelet[2165]: I1108 00:21:45.441415 2165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:45.482894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:21:45.578463 systemd[1]: Reloading finished in 353 ms. Nov 8 00:21:45.622830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:45.642158 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:21:45.642434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:45.658168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:45.832373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:45.837560 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:21:45.880221 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:21:45.880221 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:21:45.880221 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:45.880221 kubelet[2531]: I1108 00:21:45.880077 2531 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:21:45.886775 kubelet[2531]: I1108 00:21:45.886701 2531 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:21:45.886775 kubelet[2531]: I1108 00:21:45.886737 2531 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:21:45.887016 kubelet[2531]: I1108 00:21:45.886989 2531 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:21:45.888067 kubelet[2531]: I1108 00:21:45.888035 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:21:45.889954 kubelet[2531]: I1108 00:21:45.889914 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:21:45.896732 kubelet[2531]: E1108 00:21:45.896681 2531 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:21:45.896732 kubelet[2531]: I1108 00:21:45.896714 2531 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 8 00:21:45.902237 kubelet[2531]: I1108 00:21:45.902189 2531 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:21:45.902515 kubelet[2531]: I1108 00:21:45.902469 2531 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:21:45.902699 kubelet[2531]: I1108 00:21:45.902501 2531 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersio
n":2}
Nov 8 00:21:45.902851 kubelet[2531]: I1108 00:21:45.902706 2531 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:21:45.902851 kubelet[2531]: I1108 00:21:45.902719 2531 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:21:45.902851 kubelet[2531]: I1108 00:21:45.902796 2531 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:21:45.902996 kubelet[2531]: I1108 00:21:45.902973 2531 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:21:45.903035 kubelet[2531]: I1108 00:21:45.903004 2531 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:21:45.903035 kubelet[2531]: I1108 00:21:45.903025 2531 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:21:45.903102 kubelet[2531]: I1108 00:21:45.903037 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:21:45.906255 kubelet[2531]: I1108 00:21:45.905876 2531 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:21:45.906339 kubelet[2531]: I1108 00:21:45.906309 2531 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:21:45.906875 kubelet[2531]: I1108 00:21:45.906853 2531 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:21:45.906916 kubelet[2531]: I1108 00:21:45.906892 2531 server.go:1287] "Started kubelet"
Nov 8 00:21:45.907342 kubelet[2531]: I1108 00:21:45.907275 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:21:45.907564 kubelet[2531]: I1108 00:21:45.907547 2531 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:21:45.907606 kubelet[2531]: I1108 00:21:45.907593 2531 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:21:45.908036 kubelet[2531]: I1108 00:21:45.908018 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:21:45.908370 kubelet[2531]: I1108 00:21:45.908331 2531 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:21:45.909288 kubelet[2531]: I1108 00:21:45.909130 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:21:45.911607 kubelet[2531]: I1108 00:21:45.911575 2531 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:21:45.912369 kubelet[2531]: E1108 00:21:45.912340 2531 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:21:45.913205 kubelet[2531]: I1108 00:21:45.913166 2531 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:21:45.913609 kubelet[2531]: I1108 00:21:45.913428 2531 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:21:45.915034 kubelet[2531]: I1108 00:21:45.914895 2531 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:21:45.916776 kubelet[2531]: I1108 00:21:45.915250 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:21:45.920597 kubelet[2531]: E1108 00:21:45.920561 2531 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:21:45.926297 kubelet[2531]: I1108 00:21:45.926265 2531 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:21:45.933398 kubelet[2531]: I1108 00:21:45.933237 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:21:45.934714 kubelet[2531]: I1108 00:21:45.934698 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:21:45.934821 kubelet[2531]: I1108 00:21:45.934803 2531 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:21:45.934882 kubelet[2531]: I1108 00:21:45.934831 2531 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:21:45.934882 kubelet[2531]: I1108 00:21:45.934839 2531 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:21:45.934953 kubelet[2531]: E1108 00:21:45.934887 2531 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:21:45.967409 kubelet[2531]: I1108 00:21:45.967346 2531 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:21:45.967409 kubelet[2531]: I1108 00:21:45.967386 2531 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:21:45.967568 kubelet[2531]: I1108 00:21:45.967440 2531 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:21:45.967645 kubelet[2531]: I1108 00:21:45.967623 2531 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:21:45.967704 kubelet[2531]: I1108 00:21:45.967637 2531 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:21:45.967704 kubelet[2531]: I1108 00:21:45.967659 2531 policy_none.go:49] "None policy: Start"
Nov 8 00:21:45.967704 kubelet[2531]: I1108 00:21:45.967668 2531 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:21:45.967704 kubelet[2531]: I1108 00:21:45.967678 2531 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:21:45.967876 kubelet[2531]: I1108 00:21:45.967857 2531 state_mem.go:75] "Updated machine memory state"
Nov 8 00:21:45.972301 kubelet[2531]: I1108 00:21:45.972263 2531 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:21:45.972487 kubelet[2531]: I1108 00:21:45.972466 2531 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:21:45.972550 kubelet[2531]: I1108 00:21:45.972484 2531 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:21:45.972919 kubelet[2531]: I1108 00:21:45.972746 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:21:45.974422 kubelet[2531]: E1108 00:21:45.973513 2531 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:21:46.035888 kubelet[2531]: I1108 00:21:46.035844 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:46.035888 kubelet[2531]: I1108 00:21:46.035850 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.037533 kubelet[2531]: I1108 00:21:46.036125 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.042252 kubelet[2531]: E1108 00:21:46.042213 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:46.042252 kubelet[2531]: E1108 00:21:46.042213 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.042436 kubelet[2531]: E1108 00:21:46.042397 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.078294 kubelet[2531]: I1108 00:21:46.078233 2531 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:21:46.083984 kubelet[2531]: I1108 00:21:46.083848 2531 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 8 00:21:46.083984 kubelet[2531]: I1108 00:21:46.083949 2531 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 8 00:21:46.114915 kubelet[2531]: I1108 00:21:46.114854 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.114915 kubelet[2531]: I1108 00:21:46.114906 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:46.115070 kubelet[2531]: I1108 00:21:46.114931 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.115070 kubelet[2531]: I1108 00:21:46.114962 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.115070 kubelet[2531]: I1108 00:21:46.114991 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a376c2280220d0a24341b66320a1a995-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a376c2280220d0a24341b66320a1a995\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.115070 kubelet[2531]: I1108 00:21:46.115014 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.115070 kubelet[2531]: I1108 00:21:46.115033 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.115189 kubelet[2531]: I1108 00:21:46.115056 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.115189 kubelet[2531]: I1108 00:21:46.115082 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.228062 sudo[2568]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 8 00:21:46.228452 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 8 00:21:46.343506 kubelet[2531]: E1108 00:21:46.343276 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.343506 kubelet[2531]: E1108 00:21:46.343306 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.343506 kubelet[2531]: E1108 00:21:46.343352 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.869808 sudo[2568]: pam_unix(sudo:session): session closed for user root
Nov 8 00:21:46.903619 kubelet[2531]: I1108 00:21:46.903565 2531 apiserver.go:52] "Watching apiserver"
Nov 8 00:21:46.913913 kubelet[2531]: I1108 00:21:46.913871 2531 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:21:46.947976 kubelet[2531]: I1108 00:21:46.947924 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:46.948174 kubelet[2531]: I1108 00:21:46.948098 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.948336 kubelet[2531]: I1108 00:21:46.948316 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.954186 kubelet[2531]: E1108 00:21:46.954142 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:21:46.954910 kubelet[2531]: E1108 00:21:46.954359 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.956983 kubelet[2531]: E1108 00:21:46.956943 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:21:46.957234 kubelet[2531]: E1108 00:21:46.957101 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.957234 kubelet[2531]: E1108 00:21:46.956945 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:21:46.957697 kubelet[2531]: E1108 00:21:46.957616 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:46.977589 kubelet[2531]: I1108 00:21:46.977512 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.977494262 podStartE2EDuration="1.977494262s" podCreationTimestamp="2025-11-08 00:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:46.971040701 +0000 UTC m=+1.126729188" watchObservedRunningTime="2025-11-08 00:21:46.977494262 +0000 UTC m=+1.133182749"
Nov 8 00:21:46.977772 kubelet[2531]: I1108 00:21:46.977630 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.977625398 podStartE2EDuration="1.977625398s" podCreationTimestamp="2025-11-08 00:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:46.97738849 +0000 UTC m=+1.133076977" watchObservedRunningTime="2025-11-08 00:21:46.977625398 +0000 UTC m=+1.133313885"
Nov 8 00:21:46.984481 kubelet[2531]: I1108 00:21:46.984412 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.984390709 podStartE2EDuration="1.984390709s" podCreationTimestamp="2025-11-08 00:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:46.984243721 +0000 UTC m=+1.139932208" watchObservedRunningTime="2025-11-08 00:21:46.984390709 +0000 UTC m=+1.140079216"
Nov 8 00:21:47.948620 kubelet[2531]: E1108 00:21:47.948574 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:47.949086 kubelet[2531]: E1108 00:21:47.948631 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:47.949086 kubelet[2531]: E1108 00:21:47.948871 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:48.453867 sudo[1664]: pam_unix(sudo:session): session closed for user root
Nov 8 00:21:48.455852 sshd[1658]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:48.459724 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:46388.service: Deactivated successfully.
Nov 8 00:21:48.461621 systemd[1]: session-9.scope: Deactivated successfully.
Nov 8 00:21:48.461849 systemd[1]: session-9.scope: Consumed 5.383s CPU time, 160.0M memory peak, 0B memory swap peak.
Nov 8 00:21:48.462322 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
Nov 8 00:21:48.463438 systemd-logind[1451]: Removed session 9.
Nov 8 00:21:48.953327 kubelet[2531]: E1108 00:21:48.952532 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:49.397054 update_engine[1455]: I20251108 00:21:49.396922 1455 update_attempter.cc:509] Updating boot flags...
Nov 8 00:21:49.428801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2616)
Nov 8 00:21:49.478391 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2617)
Nov 8 00:21:49.509824 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2617)
Nov 8 00:21:50.169993 kubelet[2531]: E1108 00:21:50.169944 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:51.612367 kubelet[2531]: I1108 00:21:51.612322 2531 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:21:51.612834 containerd[1465]: time="2025-11-08T00:21:51.612774422Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 00:21:51.613095 kubelet[2531]: I1108 00:21:51.612997 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:21:52.303021 systemd[1]: Created slice kubepods-besteffort-pod3c05ef92_30ec_4542_b585_d8e849dc4de3.slice - libcontainer container kubepods-besteffort-pod3c05ef92_30ec_4542_b585_d8e849dc4de3.slice.
Nov 8 00:21:52.317907 systemd[1]: Created slice kubepods-burstable-pod9fb481ab_ab2e_47e7_9282_afd10fac9545.slice - libcontainer container kubepods-burstable-pod9fb481ab_ab2e_47e7_9282_afd10fac9545.slice.
Nov 8 00:21:52.352304 kubelet[2531]: I1108 00:21:52.352205 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwfm\" (UniqueName: \"kubernetes.io/projected/3c05ef92-30ec-4542-b585-d8e849dc4de3-kube-api-access-lpwfm\") pod \"kube-proxy-twh2s\" (UID: \"3c05ef92-30ec-4542-b585-d8e849dc4de3\") " pod="kube-system/kube-proxy-twh2s"
Nov 8 00:21:52.352304 kubelet[2531]: I1108 00:21:52.352259 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb481ab-ab2e-47e7-9282-afd10fac9545-clustermesh-secrets\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352304 kubelet[2531]: I1108 00:21:52.352278 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-config-path\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352304 kubelet[2531]: I1108 00:21:52.352303 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-net\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352304 kubelet[2531]: I1108 00:21:52.352318 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-kernel\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352334 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c05ef92-30ec-4542-b585-d8e849dc4de3-kube-proxy\") pod \"kube-proxy-twh2s\" (UID: \"3c05ef92-30ec-4542-b585-d8e849dc4de3\") " pod="kube-system/kube-proxy-twh2s"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352349 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-cgroup\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352366 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-etc-cni-netd\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352383 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-run\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352411 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-bpf-maps\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352654 kubelet[2531]: I1108 00:21:52.352438 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-lib-modules\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352453 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-hubble-tls\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352466 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-hostproc\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352480 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-xtables-lock\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352496 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmbg\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-kube-api-access-ffmbg\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352520 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c05ef92-30ec-4542-b585-d8e849dc4de3-lib-modules\") pod \"kube-proxy-twh2s\" (UID: \"3c05ef92-30ec-4542-b585-d8e849dc4de3\") " pod="kube-system/kube-proxy-twh2s"
Nov 8 00:21:52.352906 kubelet[2531]: I1108 00:21:52.352554 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c05ef92-30ec-4542-b585-d8e849dc4de3-xtables-lock\") pod \"kube-proxy-twh2s\" (UID: \"3c05ef92-30ec-4542-b585-d8e849dc4de3\") " pod="kube-system/kube-proxy-twh2s"
Nov 8 00:21:52.353120 kubelet[2531]: I1108 00:21:52.352570 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cni-path\") pod \"cilium-4r4nx\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") " pod="kube-system/cilium-4r4nx"
Nov 8 00:21:52.402252 kubelet[2531]: E1108 00:21:52.402182 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.597100 systemd[1]: Created slice kubepods-besteffort-pod20efe8c4_791b_4f39_8abf_12fa3cae44d2.slice - libcontainer container kubepods-besteffort-pod20efe8c4_791b_4f39_8abf_12fa3cae44d2.slice.
Nov 8 00:21:52.615709 kubelet[2531]: E1108 00:21:52.615647 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.617460 containerd[1465]: time="2025-11-08T00:21:52.617307732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twh2s,Uid:3c05ef92-30ec-4542-b585-d8e849dc4de3,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:52.620714 kubelet[2531]: E1108 00:21:52.620654 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.621234 containerd[1465]: time="2025-11-08T00:21:52.621177576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4r4nx,Uid:9fb481ab-ab2e-47e7-9282-afd10fac9545,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:52.627098 kubelet[2531]: E1108 00:21:52.627050 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.653916 containerd[1465]: time="2025-11-08T00:21:52.653653388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:52.653916 containerd[1465]: time="2025-11-08T00:21:52.653857937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:52.653916 containerd[1465]: time="2025-11-08T00:21:52.653875574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:52.654291 containerd[1465]: time="2025-11-08T00:21:52.653970387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:52.655491 kubelet[2531]: I1108 00:21:52.655440 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwp2\" (UniqueName: \"kubernetes.io/projected/20efe8c4-791b-4f39-8abf-12fa3cae44d2-kube-api-access-tpwp2\") pod \"cilium-operator-6c4d7847fc-6924v\" (UID: \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\") " pod="kube-system/cilium-operator-6c4d7847fc-6924v"
Nov 8 00:21:52.655600 kubelet[2531]: I1108 00:21:52.655508 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20efe8c4-791b-4f39-8abf-12fa3cae44d2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6924v\" (UID: \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\") " pod="kube-system/cilium-operator-6c4d7847fc-6924v"
Nov 8 00:21:52.662992 containerd[1465]: time="2025-11-08T00:21:52.662353251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:52.662992 containerd[1465]: time="2025-11-08T00:21:52.662448524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:52.662992 containerd[1465]: time="2025-11-08T00:21:52.662462906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:52.662992 containerd[1465]: time="2025-11-08T00:21:52.662541285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:52.717168 systemd[1]: Started cri-containerd-45af8850bc094170c4b508a6afb313a2c7ce237ec38c16a0396d6eda514e14ae.scope - libcontainer container 45af8850bc094170c4b508a6afb313a2c7ce237ec38c16a0396d6eda514e14ae.
Nov 8 00:21:52.740978 systemd[1]: Started cri-containerd-f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0.scope - libcontainer container f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0.
Nov 8 00:21:52.767226 containerd[1465]: time="2025-11-08T00:21:52.767174689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twh2s,Uid:3c05ef92-30ec-4542-b585-d8e849dc4de3,Namespace:kube-system,Attempt:0,} returns sandbox id \"45af8850bc094170c4b508a6afb313a2c7ce237ec38c16a0396d6eda514e14ae\""
Nov 8 00:21:52.768702 kubelet[2531]: E1108 00:21:52.768405 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.773050 containerd[1465]: time="2025-11-08T00:21:52.772999986Z" level=info msg="CreateContainer within sandbox \"45af8850bc094170c4b508a6afb313a2c7ce237ec38c16a0396d6eda514e14ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 00:21:52.775011 containerd[1465]: time="2025-11-08T00:21:52.774955390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4r4nx,Uid:9fb481ab-ab2e-47e7-9282-afd10fac9545,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\""
Nov 8 00:21:52.775749 kubelet[2531]: E1108 00:21:52.775668 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.776671 containerd[1465]: time="2025-11-08T00:21:52.776636577Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 8 00:21:52.901719 kubelet[2531]: E1108 00:21:52.901584 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.902262 containerd[1465]: time="2025-11-08T00:21:52.902183331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6924v,Uid:20efe8c4-791b-4f39-8abf-12fa3cae44d2,Namespace:kube-system,Attempt:0,}"
Nov 8 00:21:52.961214 kubelet[2531]: E1108 00:21:52.961155 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:52.961495 kubelet[2531]: E1108 00:21:52.961470 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:53.943178 containerd[1465]: time="2025-11-08T00:21:53.943101653Z" level=info msg="CreateContainer within sandbox \"45af8850bc094170c4b508a6afb313a2c7ce237ec38c16a0396d6eda514e14ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4371ad6a83184fc70b415558453c2c3b845d54d99ec903be180e4bd678e968e7\""
Nov 8 00:21:53.946970 containerd[1465]: time="2025-11-08T00:21:53.944792844Z" level=info msg="StartContainer for \"4371ad6a83184fc70b415558453c2c3b845d54d99ec903be180e4bd678e968e7\""
Nov 8 00:21:53.964482 kubelet[2531]: E1108 00:21:53.964295 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:54.021198 systemd[1]: Started cri-containerd-4371ad6a83184fc70b415558453c2c3b845d54d99ec903be180e4bd678e968e7.scope - libcontainer container 4371ad6a83184fc70b415558453c2c3b845d54d99ec903be180e4bd678e968e7.
Nov 8 00:21:54.460223 containerd[1465]: time="2025-11-08T00:21:54.460160047Z" level=info msg="StartContainer for \"4371ad6a83184fc70b415558453c2c3b845d54d99ec903be180e4bd678e968e7\" returns successfully"
Nov 8 00:21:54.972697 kubelet[2531]: E1108 00:21:54.972136 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:55.162515 containerd[1465]: time="2025-11-08T00:21:55.160260611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:55.162515 containerd[1465]: time="2025-11-08T00:21:55.160357464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:55.162515 containerd[1465]: time="2025-11-08T00:21:55.160376605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:55.162515 containerd[1465]: time="2025-11-08T00:21:55.160571041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:55.222044 systemd[1]: Started cri-containerd-b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5.scope - libcontainer container b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5.
Nov 8 00:21:55.319575 containerd[1465]: time="2025-11-08T00:21:55.319497522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6924v,Uid:20efe8c4-791b-4f39-8abf-12fa3cae44d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\"" Nov 8 00:21:55.320510 kubelet[2531]: E1108 00:21:55.320475 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:55.974701 kubelet[2531]: E1108 00:21:55.974656 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:00.175615 kubelet[2531]: E1108 00:22:00.175562 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:00.237296 kubelet[2531]: I1108 00:22:00.237215 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-twh2s" podStartSLOduration=8.237199263 podStartE2EDuration="8.237199263s" podCreationTimestamp="2025-11-08 00:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:55.424939047 +0000 UTC m=+9.580627534" watchObservedRunningTime="2025-11-08 00:22:00.237199263 +0000 UTC m=+14.392887750" Nov 8 00:22:00.982618 kubelet[2531]: E1108 00:22:00.982573 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:07.290795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823960205.mount: Deactivated successfully. 
Nov 8 00:22:10.066903 containerd[1465]: time="2025-11-08T00:22:10.066835346Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:10.069071 containerd[1465]: time="2025-11-08T00:22:10.068972014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 8 00:22:10.071038 containerd[1465]: time="2025-11-08T00:22:10.070873532Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:10.073867 containerd[1465]: time="2025-11-08T00:22:10.073803899Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.297119034s" Nov 8 00:22:10.073999 containerd[1465]: time="2025-11-08T00:22:10.073872037Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 8 00:22:10.083624 containerd[1465]: time="2025-11-08T00:22:10.083579099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 8 00:22:10.096143 containerd[1465]: time="2025-11-08T00:22:10.096095039Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:22:10.110486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882480043.mount: Deactivated successfully. Nov 8 00:22:10.115378 containerd[1465]: time="2025-11-08T00:22:10.115308435Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\"" Nov 8 00:22:10.116806 containerd[1465]: time="2025-11-08T00:22:10.115991245Z" level=info msg="StartContainer for \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\"" Nov 8 00:22:10.153896 systemd[1]: Started cri-containerd-afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774.scope - libcontainer container afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774. Nov 8 00:22:10.200110 systemd[1]: cri-containerd-afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774.scope: Deactivated successfully. 
Nov 8 00:22:10.317863 containerd[1465]: time="2025-11-08T00:22:10.317693221Z" level=info msg="StartContainer for \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\" returns successfully" Nov 8 00:22:10.685282 containerd[1465]: time="2025-11-08T00:22:10.685083089Z" level=info msg="shim disconnected" id=afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774 namespace=k8s.io Nov 8 00:22:10.685282 containerd[1465]: time="2025-11-08T00:22:10.685190120Z" level=warning msg="cleaning up after shim disconnected" id=afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774 namespace=k8s.io Nov 8 00:22:10.685282 containerd[1465]: time="2025-11-08T00:22:10.685202583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:11.003017 kubelet[2531]: E1108 00:22:11.002664 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:11.005817 containerd[1465]: time="2025-11-08T00:22:11.005742928Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:22:11.026934 containerd[1465]: time="2025-11-08T00:22:11.026873339Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\"" Nov 8 00:22:11.027424 containerd[1465]: time="2025-11-08T00:22:11.027389989Z" level=info msg="StartContainer for \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\"" Nov 8 00:22:11.056928 systemd[1]: Started cri-containerd-f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2.scope - libcontainer container 
f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2. Nov 8 00:22:11.087111 containerd[1465]: time="2025-11-08T00:22:11.087055874Z" level=info msg="StartContainer for \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\" returns successfully" Nov 8 00:22:11.098245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:22:11.098830 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:22:11.098905 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:22:11.106145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:22:11.109027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774-rootfs.mount: Deactivated successfully. Nov 8 00:22:11.110153 systemd[1]: cri-containerd-f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2.scope: Deactivated successfully. Nov 8 00:22:11.123975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2-rootfs.mount: Deactivated successfully. Nov 8 00:22:11.131240 containerd[1465]: time="2025-11-08T00:22:11.131191505Z" level=info msg="shim disconnected" id=f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2 namespace=k8s.io Nov 8 00:22:11.131240 containerd[1465]: time="2025-11-08T00:22:11.131237000Z" level=warning msg="cleaning up after shim disconnected" id=f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2 namespace=k8s.io Nov 8 00:22:11.131240 containerd[1465]: time="2025-11-08T00:22:11.131246297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:11.133001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:22:11.629785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384865063.mount: Deactivated successfully. 
Nov 8 00:22:11.918205 containerd[1465]: time="2025-11-08T00:22:11.918067622Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:11.918845 containerd[1465]: time="2025-11-08T00:22:11.918798232Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 8 00:22:11.920050 containerd[1465]: time="2025-11-08T00:22:11.920007461Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:11.921598 containerd[1465]: time="2025-11-08T00:22:11.921547300Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.837927936s" Nov 8 00:22:11.921598 containerd[1465]: time="2025-11-08T00:22:11.921583087Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 8 00:22:11.923581 containerd[1465]: time="2025-11-08T00:22:11.923538285Z" level=info msg="CreateContainer within sandbox \"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 8 00:22:11.936558 containerd[1465]: time="2025-11-08T00:22:11.936517604Z" level=info msg="CreateContainer within sandbox 
\"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\"" Nov 8 00:22:11.938069 containerd[1465]: time="2025-11-08T00:22:11.937147767Z" level=info msg="StartContainer for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\"" Nov 8 00:22:11.971899 systemd[1]: Started cri-containerd-a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f.scope - libcontainer container a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f. Nov 8 00:22:12.005933 containerd[1465]: time="2025-11-08T00:22:12.005876414Z" level=info msg="StartContainer for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" returns successfully" Nov 8 00:22:12.010859 kubelet[2531]: E1108 00:22:12.010830 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:12.013827 containerd[1465]: time="2025-11-08T00:22:12.013776174Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:22:12.017871 kubelet[2531]: E1108 00:22:12.017836 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:12.045740 containerd[1465]: time="2025-11-08T00:22:12.045497042Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\"" Nov 8 00:22:12.047171 containerd[1465]: time="2025-11-08T00:22:12.047101433Z" level=info msg="StartContainer 
for \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\"" Nov 8 00:22:12.050191 kubelet[2531]: I1108 00:22:12.050126 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6924v" podStartSLOduration=3.449045796 podStartE2EDuration="20.050086843s" podCreationTimestamp="2025-11-08 00:21:52 +0000 UTC" firstStartedPulling="2025-11-08 00:21:55.321179426 +0000 UTC m=+9.476867913" lastFinishedPulling="2025-11-08 00:22:11.922220473 +0000 UTC m=+26.077908960" observedRunningTime="2025-11-08 00:22:12.049744171 +0000 UTC m=+26.205432658" watchObservedRunningTime="2025-11-08 00:22:12.050086843 +0000 UTC m=+26.205775320" Nov 8 00:22:12.089939 systemd[1]: Started cri-containerd-3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898.scope - libcontainer container 3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898. Nov 8 00:22:12.145373 containerd[1465]: time="2025-11-08T00:22:12.145321514Z" level=info msg="StartContainer for \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\" returns successfully" Nov 8 00:22:12.150035 systemd[1]: cri-containerd-3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898.scope: Deactivated successfully. Nov 8 00:22:12.182857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898-rootfs.mount: Deactivated successfully. 
Nov 8 00:22:12.402449 containerd[1465]: time="2025-11-08T00:22:12.402376061Z" level=info msg="shim disconnected" id=3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898 namespace=k8s.io Nov 8 00:22:12.402449 containerd[1465]: time="2025-11-08T00:22:12.402437206Z" level=warning msg="cleaning up after shim disconnected" id=3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898 namespace=k8s.io Nov 8 00:22:12.402449 containerd[1465]: time="2025-11-08T00:22:12.402447595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:13.031493 kubelet[2531]: E1108 00:22:13.031438 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:13.031493 kubelet[2531]: E1108 00:22:13.031503 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:13.045488 containerd[1465]: time="2025-11-08T00:22:13.045445843Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:22:13.348191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977311904.mount: Deactivated successfully. 
Nov 8 00:22:13.352295 containerd[1465]: time="2025-11-08T00:22:13.352236635Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\"" Nov 8 00:22:13.353551 containerd[1465]: time="2025-11-08T00:22:13.353514634Z" level=info msg="StartContainer for \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\"" Nov 8 00:22:13.411891 systemd[1]: Started cri-containerd-2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1.scope - libcontainer container 2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1. Nov 8 00:22:13.456027 containerd[1465]: time="2025-11-08T00:22:13.455966350Z" level=info msg="StartContainer for \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\" returns successfully" Nov 8 00:22:13.470850 systemd[1]: cri-containerd-2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1.scope: Deactivated successfully. Nov 8 00:22:13.529965 containerd[1465]: time="2025-11-08T00:22:13.529811803Z" level=info msg="shim disconnected" id=2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1 namespace=k8s.io Nov 8 00:22:13.529965 containerd[1465]: time="2025-11-08T00:22:13.529951075Z" level=warning msg="cleaning up after shim disconnected" id=2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1 namespace=k8s.io Nov 8 00:22:13.530253 containerd[1465]: time="2025-11-08T00:22:13.529961444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:13.957189 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:45744.service - OpenSSH per-connection server daemon (10.0.0.1:45744). 
Nov 8 00:22:14.020776 sshd[3234]: Accepted publickey for core from 10.0.0.1 port 45744 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:14.022901 sshd[3234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.027308 systemd-logind[1451]: New session 10 of user core. Nov 8 00:22:14.035096 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:22:14.038834 kubelet[2531]: E1108 00:22:14.038799 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:14.041546 containerd[1465]: time="2025-11-08T00:22:14.041506491Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:22:14.200689 containerd[1465]: time="2025-11-08T00:22:14.200626613Z" level=info msg="CreateContainer within sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\"" Nov 8 00:22:14.204175 containerd[1465]: time="2025-11-08T00:22:14.204011933Z" level=info msg="StartContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\"" Nov 8 00:22:14.205104 sshd[3234]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.209899 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:45744.service: Deactivated successfully. Nov 8 00:22:14.212525 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:22:14.213449 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:22:14.214491 systemd-logind[1451]: Removed session 10. 
Nov 8 00:22:14.242028 systemd[1]: Started cri-containerd-325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea.scope - libcontainer container 325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea. Nov 8 00:22:14.277328 containerd[1465]: time="2025-11-08T00:22:14.277228925Z" level=info msg="StartContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" returns successfully" Nov 8 00:22:14.345394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1-rootfs.mount: Deactivated successfully. Nov 8 00:22:14.438455 kubelet[2531]: I1108 00:22:14.438413 2531 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:22:14.477639 systemd[1]: Created slice kubepods-burstable-pod89c1c9b0_478b_4f99_a8a0_d78508d72999.slice - libcontainer container kubepods-burstable-pod89c1c9b0_478b_4f99_a8a0_d78508d72999.slice. Nov 8 00:22:14.489618 systemd[1]: Created slice kubepods-burstable-pod46a74e4f_42e2_49a3_9079_40f00e804b19.slice - libcontainer container kubepods-burstable-pod46a74e4f_42e2_49a3_9079_40f00e804b19.slice. 
Nov 8 00:22:14.603844 kubelet[2531]: I1108 00:22:14.603773 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btv6\" (UniqueName: \"kubernetes.io/projected/89c1c9b0-478b-4f99-a8a0-d78508d72999-kube-api-access-9btv6\") pod \"coredns-668d6bf9bc-gqpb4\" (UID: \"89c1c9b0-478b-4f99-a8a0-d78508d72999\") " pod="kube-system/coredns-668d6bf9bc-gqpb4" Nov 8 00:22:14.603844 kubelet[2531]: I1108 00:22:14.603832 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89c1c9b0-478b-4f99-a8a0-d78508d72999-config-volume\") pod \"coredns-668d6bf9bc-gqpb4\" (UID: \"89c1c9b0-478b-4f99-a8a0-d78508d72999\") " pod="kube-system/coredns-668d6bf9bc-gqpb4" Nov 8 00:22:14.603844 kubelet[2531]: I1108 00:22:14.603855 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdfz\" (UniqueName: \"kubernetes.io/projected/46a74e4f-42e2-49a3-9079-40f00e804b19-kube-api-access-5jdfz\") pod \"coredns-668d6bf9bc-cbvjr\" (UID: \"46a74e4f-42e2-49a3-9079-40f00e804b19\") " pod="kube-system/coredns-668d6bf9bc-cbvjr" Nov 8 00:22:14.604117 kubelet[2531]: I1108 00:22:14.603876 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a74e4f-42e2-49a3-9079-40f00e804b19-config-volume\") pod \"coredns-668d6bf9bc-cbvjr\" (UID: \"46a74e4f-42e2-49a3-9079-40f00e804b19\") " pod="kube-system/coredns-668d6bf9bc-cbvjr" Nov 8 00:22:14.785705 kubelet[2531]: E1108 00:22:14.785652 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:14.786531 containerd[1465]: time="2025-11-08T00:22:14.786484164Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gqpb4,Uid:89c1c9b0-478b-4f99-a8a0-d78508d72999,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:14.794048 kubelet[2531]: E1108 00:22:14.794015 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:14.795428 containerd[1465]: time="2025-11-08T00:22:14.795386655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbvjr,Uid:46a74e4f-42e2-49a3-9079-40f00e804b19,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:15.042963 kubelet[2531]: E1108 00:22:15.042836 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:16.044394 kubelet[2531]: E1108 00:22:16.044347 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:16.529620 systemd-networkd[1395]: cilium_host: Link UP Nov 8 00:22:16.529856 systemd-networkd[1395]: cilium_net: Link UP Nov 8 00:22:16.529861 systemd-networkd[1395]: cilium_net: Gained carrier Nov 8 00:22:16.530072 systemd-networkd[1395]: cilium_host: Gained carrier Nov 8 00:22:16.532961 systemd-networkd[1395]: cilium_host: Gained IPv6LL Nov 8 00:22:16.658043 systemd-networkd[1395]: cilium_vxlan: Link UP Nov 8 00:22:16.658057 systemd-networkd[1395]: cilium_vxlan: Gained carrier Nov 8 00:22:16.904822 kernel: NET: Registered PF_ALG protocol family Nov 8 00:22:17.046823 kubelet[2531]: E1108 00:22:17.046750 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:17.323979 systemd-networkd[1395]: cilium_net: Gained IPv6LL Nov 8 00:22:17.666433 systemd-networkd[1395]: 
lxc_health: Link UP Nov 8 00:22:17.679402 systemd-networkd[1395]: lxc_health: Gained carrier Nov 8 00:22:17.879611 systemd-networkd[1395]: lxcfe3941f8f411: Link UP Nov 8 00:22:17.889833 kernel: eth0: renamed from tmp398f3 Nov 8 00:22:17.893745 systemd-networkd[1395]: lxcfe3941f8f411: Gained carrier Nov 8 00:22:17.907383 systemd-networkd[1395]: lxc8b1cbad498e8: Link UP Nov 8 00:22:17.913963 kernel: eth0: renamed from tmp8d288 Nov 8 00:22:17.919890 systemd-networkd[1395]: lxc8b1cbad498e8: Gained carrier Nov 8 00:22:18.092149 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Nov 8 00:22:18.622526 kubelet[2531]: E1108 00:22:18.622480 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:18.644889 kubelet[2531]: I1108 00:22:18.644794 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4r4nx" podStartSLOduration=9.337626924 podStartE2EDuration="26.644751033s" podCreationTimestamp="2025-11-08 00:21:52 +0000 UTC" firstStartedPulling="2025-11-08 00:21:52.77623625 +0000 UTC m=+6.931924737" lastFinishedPulling="2025-11-08 00:22:10.083360329 +0000 UTC m=+24.239048846" observedRunningTime="2025-11-08 00:22:15.057958936 +0000 UTC m=+29.213647433" watchObservedRunningTime="2025-11-08 00:22:18.644751033 +0000 UTC m=+32.800439520" Nov 8 00:22:19.050395 kubelet[2531]: E1108 00:22:19.050342 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:19.054883 systemd-networkd[1395]: lxc_health: Gained IPv6LL Nov 8 00:22:19.217613 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:45754.service - OpenSSH per-connection server daemon (10.0.0.1:45754). 
Nov 8 00:22:19.244132 systemd-networkd[1395]: lxc8b1cbad498e8: Gained IPv6LL Nov 8 00:22:19.265286 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 45754 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:19.267225 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:19.272529 systemd-logind[1451]: New session 11 of user core. Nov 8 00:22:19.279885 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:22:19.308037 systemd-networkd[1395]: lxcfe3941f8f411: Gained IPv6LL Nov 8 00:22:19.417390 sshd[3769]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:19.422207 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:45754.service: Deactivated successfully. Nov 8 00:22:19.424450 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:22:19.425179 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:22:19.426482 systemd-logind[1451]: Removed session 11. Nov 8 00:22:20.051813 kubelet[2531]: E1108 00:22:20.051745 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:21.553246 containerd[1465]: time="2025-11-08T00:22:21.552990570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:21.555618 containerd[1465]: time="2025-11-08T00:22:21.553836307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:21.555618 containerd[1465]: time="2025-11-08T00:22:21.553883976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:21.555618 containerd[1465]: time="2025-11-08T00:22:21.554063894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:21.563173 containerd[1465]: time="2025-11-08T00:22:21.562884099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:21.563173 containerd[1465]: time="2025-11-08T00:22:21.562956655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:21.563173 containerd[1465]: time="2025-11-08T00:22:21.562978967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:21.563410 containerd[1465]: time="2025-11-08T00:22:21.563297665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:21.582932 systemd[1]: Started cri-containerd-398f31823d5ad976d5edefa12f6cfb3c00ac2f09c76443cc030534e6034a9a2c.scope - libcontainer container 398f31823d5ad976d5edefa12f6cfb3c00ac2f09c76443cc030534e6034a9a2c. Nov 8 00:22:21.589192 systemd[1]: Started cri-containerd-8d288b8fbefbce0ee1848893bb8a6836c75701921d85843e08549af35d6f195a.scope - libcontainer container 8d288b8fbefbce0ee1848893bb8a6836c75701921d85843e08549af35d6f195a. 
Nov 8 00:22:21.599061 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:22:21.604535 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:22:21.632334 containerd[1465]: time="2025-11-08T00:22:21.632271886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbvjr,Uid:46a74e4f-42e2-49a3-9079-40f00e804b19,Namespace:kube-system,Attempt:0,} returns sandbox id \"398f31823d5ad976d5edefa12f6cfb3c00ac2f09c76443cc030534e6034a9a2c\"" Nov 8 00:22:21.633245 kubelet[2531]: E1108 00:22:21.633191 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:21.636770 containerd[1465]: time="2025-11-08T00:22:21.636712566Z" level=info msg="CreateContainer within sandbox \"398f31823d5ad976d5edefa12f6cfb3c00ac2f09c76443cc030534e6034a9a2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:22:21.644293 containerd[1465]: time="2025-11-08T00:22:21.644232983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqpb4,Uid:89c1c9b0-478b-4f99-a8a0-d78508d72999,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d288b8fbefbce0ee1848893bb8a6836c75701921d85843e08549af35d6f195a\"" Nov 8 00:22:21.645190 kubelet[2531]: E1108 00:22:21.645157 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:21.649354 containerd[1465]: time="2025-11-08T00:22:21.649313442Z" level=info msg="CreateContainer within sandbox \"8d288b8fbefbce0ee1848893bb8a6836c75701921d85843e08549af35d6f195a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:22:21.670151 containerd[1465]: time="2025-11-08T00:22:21.670089363Z" 
level=info msg="CreateContainer within sandbox \"398f31823d5ad976d5edefa12f6cfb3c00ac2f09c76443cc030534e6034a9a2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63ca81b3a8c481014e3dc7c44e87860f8fc2f8e18ac68bc87806cdff18ca4052\"" Nov 8 00:22:21.671892 containerd[1465]: time="2025-11-08T00:22:21.670805967Z" level=info msg="StartContainer for \"63ca81b3a8c481014e3dc7c44e87860f8fc2f8e18ac68bc87806cdff18ca4052\"" Nov 8 00:22:21.675711 containerd[1465]: time="2025-11-08T00:22:21.675655744Z" level=info msg="CreateContainer within sandbox \"8d288b8fbefbce0ee1848893bb8a6836c75701921d85843e08549af35d6f195a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b02a066dd959d30feb50ada19255cc43509c2a5ee6544edbf4661f9aefdeafef\"" Nov 8 00:22:21.676348 containerd[1465]: time="2025-11-08T00:22:21.676308289Z" level=info msg="StartContainer for \"b02a066dd959d30feb50ada19255cc43509c2a5ee6544edbf4661f9aefdeafef\"" Nov 8 00:22:21.706221 systemd[1]: Started cri-containerd-63ca81b3a8c481014e3dc7c44e87860f8fc2f8e18ac68bc87806cdff18ca4052.scope - libcontainer container 63ca81b3a8c481014e3dc7c44e87860f8fc2f8e18ac68bc87806cdff18ca4052. Nov 8 00:22:21.710044 systemd[1]: Started cri-containerd-b02a066dd959d30feb50ada19255cc43509c2a5ee6544edbf4661f9aefdeafef.scope - libcontainer container b02a066dd959d30feb50ada19255cc43509c2a5ee6544edbf4661f9aefdeafef. 
Nov 8 00:22:21.740323 containerd[1465]: time="2025-11-08T00:22:21.740272772Z" level=info msg="StartContainer for \"63ca81b3a8c481014e3dc7c44e87860f8fc2f8e18ac68bc87806cdff18ca4052\" returns successfully"
Nov 8 00:22:21.748307 containerd[1465]: time="2025-11-08T00:22:21.748255176Z" level=info msg="StartContainer for \"b02a066dd959d30feb50ada19255cc43509c2a5ee6544edbf4661f9aefdeafef\" returns successfully"
Nov 8 00:22:22.069779 kubelet[2531]: E1108 00:22:22.069720 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:22.072529 kubelet[2531]: E1108 00:22:22.072477 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:22.085732 kubelet[2531]: I1108 00:22:22.085663 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbvjr" podStartSLOduration=30.085646448 podStartE2EDuration="30.085646448s" podCreationTimestamp="2025-11-08 00:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:22.084365125 +0000 UTC m=+36.240053622" watchObservedRunningTime="2025-11-08 00:22:22.085646448 +0000 UTC m=+36.241334925"
Nov 8 00:22:22.095802 kubelet[2531]: I1108 00:22:22.095674 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gqpb4" podStartSLOduration=30.095656836 podStartE2EDuration="30.095656836s" podCreationTimestamp="2025-11-08 00:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:22.094964857 +0000 UTC m=+36.250653334" watchObservedRunningTime="2025-11-08 00:22:22.095656836 +0000 UTC m=+36.251345323"
Nov 8 00:22:23.074876 kubelet[2531]: E1108 00:22:23.074835 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:23.075459 kubelet[2531]: E1108 00:22:23.075020 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:24.076963 kubelet[2531]: E1108 00:22:24.076927 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:24.077443 kubelet[2531]: E1108 00:22:24.077087 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:24.433004 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442).
Nov 8 00:22:24.474553 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:24.476634 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:24.482957 systemd-logind[1451]: New session 12 of user core.
Nov 8 00:22:24.492086 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:22:24.656057 sshd[3962]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:24.660906 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:22:24.661367 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:57442.service: Deactivated successfully.
Nov 8 00:22:24.663565 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:22:24.665130 systemd-logind[1451]: Removed session 12.
Nov 8 00:22:29.668373 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:57452.service - OpenSSH per-connection server daemon (10.0.0.1:57452).
Nov 8 00:22:29.705665 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 57452 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:29.707460 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:29.711828 systemd-logind[1451]: New session 13 of user core.
Nov 8 00:22:29.722890 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:22:29.841541 sshd[3978]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:29.845815 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:57452.service: Deactivated successfully.
Nov 8 00:22:29.847931 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:22:29.848796 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:22:29.849790 systemd-logind[1451]: Removed session 13.
Nov 8 00:22:34.854720 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:46094.service - OpenSSH per-connection server daemon (10.0.0.1:46094).
Nov 8 00:22:34.890616 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 46094 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:34.892291 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:34.896910 systemd-logind[1451]: New session 14 of user core.
Nov 8 00:22:34.910925 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:22:35.033746 sshd[3993]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:35.042827 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:46094.service: Deactivated successfully.
Nov 8 00:22:35.045078 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:22:35.046933 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:22:35.055055 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:46102.service - OpenSSH per-connection server daemon (10.0.0.1:46102).
Nov 8 00:22:35.056102 systemd-logind[1451]: Removed session 14.
Nov 8 00:22:35.090829 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 46102 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:35.092604 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:35.096851 systemd-logind[1451]: New session 15 of user core.
Nov 8 00:22:35.108891 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:22:35.282208 sshd[4009]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:35.294604 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:46102.service: Deactivated successfully.
Nov 8 00:22:35.303316 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:22:35.307036 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:22:35.319066 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
Nov 8 00:22:35.321661 systemd-logind[1451]: Removed session 15.
Nov 8 00:22:35.365951 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:35.368015 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:35.372713 systemd-logind[1451]: New session 16 of user core.
Nov 8 00:22:35.379926 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:22:35.494101 sshd[4022]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:35.498601 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:46112.service: Deactivated successfully.
Nov 8 00:22:35.500994 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:22:35.501698 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:22:35.502626 systemd-logind[1451]: Removed session 16.
Nov 8 00:22:40.505965 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:46120.service - OpenSSH per-connection server daemon (10.0.0.1:46120).
Nov 8 00:22:40.542369 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 46120 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:40.544365 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:40.548684 systemd-logind[1451]: New session 17 of user core.
Nov 8 00:22:40.558931 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:22:40.669980 sshd[4039]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:40.675277 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:46120.service: Deactivated successfully.
Nov 8 00:22:40.677749 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:22:40.678518 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:22:40.679709 systemd-logind[1451]: Removed session 17.
Nov 8 00:22:45.685430 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:51186.service - OpenSSH per-connection server daemon (10.0.0.1:51186).
Nov 8 00:22:45.723519 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 51186 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:45.725204 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:45.729019 systemd-logind[1451]: New session 18 of user core.
Nov 8 00:22:45.738892 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:22:45.861005 sshd[4054]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:45.877445 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:51186.service: Deactivated successfully.
Nov 8 00:22:45.879863 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:22:45.881437 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:22:45.892063 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:51190.service - OpenSSH per-connection server daemon (10.0.0.1:51190).
Nov 8 00:22:45.893014 systemd-logind[1451]: Removed session 18.
Nov 8 00:22:45.931297 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 51190 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:45.933453 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:45.938380 systemd-logind[1451]: New session 19 of user core.
Nov 8 00:22:45.944334 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:22:46.230012 sshd[4069]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:46.238982 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:51190.service: Deactivated successfully.
Nov 8 00:22:46.241013 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:22:46.242812 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:22:46.251321 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:51196.service - OpenSSH per-connection server daemon (10.0.0.1:51196).
Nov 8 00:22:46.252501 systemd-logind[1451]: Removed session 19.
Nov 8 00:22:46.281256 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:46.282921 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:46.287035 systemd-logind[1451]: New session 20 of user core.
Nov 8 00:22:46.296875 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:22:47.254257 sshd[4083]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:47.267661 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:51196.service: Deactivated successfully.
Nov 8 00:22:47.269810 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:22:47.271930 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:22:47.283278 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198).
Nov 8 00:22:47.284449 systemd-logind[1451]: Removed session 20.
Nov 8 00:22:47.327059 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:47.329459 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:47.335250 systemd-logind[1451]: New session 21 of user core.
Nov 8 00:22:47.343936 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:22:47.601377 sshd[4107]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:47.614280 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:51198.service: Deactivated successfully.
Nov 8 00:22:47.616403 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:22:47.617165 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:22:47.630372 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:51202.service - OpenSSH per-connection server daemon (10.0.0.1:51202).
Nov 8 00:22:47.631433 systemd-logind[1451]: Removed session 21.
Nov 8 00:22:47.660092 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 51202 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:47.661897 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:47.666036 systemd-logind[1451]: New session 22 of user core.
Nov 8 00:22:47.680901 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:22:47.798397 sshd[4120]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:47.802461 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:51202.service: Deactivated successfully.
Nov 8 00:22:47.804514 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:22:47.805211 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:22:47.806482 systemd-logind[1451]: Removed session 22.
Nov 8 00:22:52.810120 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:51210.service - OpenSSH per-connection server daemon (10.0.0.1:51210).
Nov 8 00:22:52.846638 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 51210 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:52.848665 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:52.852844 systemd-logind[1451]: New session 23 of user core.
Nov 8 00:22:52.865914 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:22:52.979558 sshd[4137]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:52.985113 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:51210.service: Deactivated successfully.
Nov 8 00:22:52.987511 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:22:52.988351 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:22:52.989664 systemd-logind[1451]: Removed session 23.
Nov 8 00:22:57.992780 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:59914.service - OpenSSH per-connection server daemon (10.0.0.1:59914).
Nov 8 00:22:58.031457 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 59914 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:22:58.033874 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:58.039232 systemd-logind[1451]: New session 24 of user core.
Nov 8 00:22:58.055002 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:22:58.182780 sshd[4153]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:58.187303 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:59914.service: Deactivated successfully.
Nov 8 00:22:58.190348 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:22:58.192499 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:22:58.193730 systemd-logind[1451]: Removed session 24.
Nov 8 00:23:03.198980 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:33708.service - OpenSSH per-connection server daemon (10.0.0.1:33708).
Nov 8 00:23:03.233236 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 33708 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:23:03.234809 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:03.238959 systemd-logind[1451]: New session 25 of user core.
Nov 8 00:23:03.248883 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:23:03.491264 sshd[4167]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:03.495560 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:33708.service: Deactivated successfully.
Nov 8 00:23:03.497690 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:23:03.498467 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:23:03.499518 systemd-logind[1451]: Removed session 25.
Nov 8 00:23:08.502998 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720).
Nov 8 00:23:08.539185 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:23:08.540911 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:08.544974 systemd-logind[1451]: New session 26 of user core.
Nov 8 00:23:08.552908 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:23:08.664511 sshd[4182]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:08.679071 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:33720.service: Deactivated successfully.
Nov 8 00:23:08.681392 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:23:08.683180 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:23:08.690249 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:33728.service - OpenSSH per-connection server daemon (10.0.0.1:33728).
Nov 8 00:23:08.691986 systemd-logind[1451]: Removed session 26.
Nov 8 00:23:08.722818 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 33728 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0
Nov 8 00:23:08.724872 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:08.729328 systemd-logind[1451]: New session 27 of user core.
Nov 8 00:23:08.739938 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 00:23:10.158086 containerd[1465]: time="2025-11-08T00:23:10.157948741Z" level=info msg="StopContainer for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" with timeout 30 (s)"
Nov 8 00:23:10.158970 containerd[1465]: time="2025-11-08T00:23:10.158906867Z" level=info msg="Stop container \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" with signal terminated"
Nov 8 00:23:10.173188 systemd[1]: cri-containerd-a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f.scope: Deactivated successfully.
Nov 8 00:23:10.203578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f-rootfs.mount: Deactivated successfully.
Nov 8 00:23:10.209489 containerd[1465]: time="2025-11-08T00:23:10.209420855Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:23:10.209849 containerd[1465]: time="2025-11-08T00:23:10.209553474Z" level=info msg="StopContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" with timeout 2 (s)"
Nov 8 00:23:10.209980 containerd[1465]: time="2025-11-08T00:23:10.209891367Z" level=info msg="Stop container \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" with signal terminated"
Nov 8 00:23:10.219012 containerd[1465]: time="2025-11-08T00:23:10.218927332Z" level=info msg="shim disconnected" id=a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f namespace=k8s.io
Nov 8 00:23:10.219012 containerd[1465]: time="2025-11-08T00:23:10.218998876Z" level=warning msg="cleaning up after shim disconnected" id=a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f namespace=k8s.io
Nov 8 00:23:10.219012 containerd[1465]: time="2025-11-08T00:23:10.219011329Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:10.219718 systemd-networkd[1395]: lxc_health: Link DOWN
Nov 8 00:23:10.219728 systemd-networkd[1395]: lxc_health: Lost carrier
Nov 8 00:23:10.242964 containerd[1465]: time="2025-11-08T00:23:10.242910316Z" level=info msg="StopContainer for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" returns successfully"
Nov 8 00:23:10.244383 systemd[1]: cri-containerd-325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea.scope: Deactivated successfully.
Nov 8 00:23:10.244733 systemd[1]: cri-containerd-325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea.scope: Consumed 7.337s CPU time.
Nov 8 00:23:10.247710 containerd[1465]: time="2025-11-08T00:23:10.247661425Z" level=info msg="StopPodSandbox for \"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\""
Nov 8 00:23:10.247819 containerd[1465]: time="2025-11-08T00:23:10.247737949Z" level=info msg="Container to stop \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.250221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5-shm.mount: Deactivated successfully.
Nov 8 00:23:10.260228 systemd[1]: cri-containerd-b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5.scope: Deactivated successfully.
Nov 8 00:23:10.274502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea-rootfs.mount: Deactivated successfully.
Nov 8 00:23:10.282386 containerd[1465]: time="2025-11-08T00:23:10.282292669Z" level=info msg="shim disconnected" id=325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea namespace=k8s.io
Nov 8 00:23:10.282386 containerd[1465]: time="2025-11-08T00:23:10.282371106Z" level=warning msg="cleaning up after shim disconnected" id=325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea namespace=k8s.io
Nov 8 00:23:10.282386 containerd[1465]: time="2025-11-08T00:23:10.282379993Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:10.291011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5-rootfs.mount: Deactivated successfully.
Nov 8 00:23:10.298428 containerd[1465]: time="2025-11-08T00:23:10.298349259Z" level=info msg="shim disconnected" id=b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5 namespace=k8s.io
Nov 8 00:23:10.298829 containerd[1465]: time="2025-11-08T00:23:10.298694045Z" level=warning msg="cleaning up after shim disconnected" id=b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5 namespace=k8s.io
Nov 8 00:23:10.298829 containerd[1465]: time="2025-11-08T00:23:10.298708903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:10.308242 containerd[1465]: time="2025-11-08T00:23:10.308191675Z" level=info msg="StopContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" returns successfully"
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.308965537Z" level=info msg="StopPodSandbox for \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\""
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.308999732Z" level=info msg="Container to stop \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.309012676Z" level=info msg="Container to stop \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.309022915Z" level=info msg="Container to stop \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.309033725Z" level=info msg="Container to stop \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.309175 containerd[1465]: time="2025-11-08T00:23:10.309043343Z" level=info msg="Container to stop \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:23:10.316441 systemd[1]: cri-containerd-f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0.scope: Deactivated successfully.
Nov 8 00:23:10.332870 containerd[1465]: time="2025-11-08T00:23:10.332796807Z" level=info msg="TearDown network for sandbox \"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\" successfully"
Nov 8 00:23:10.332870 containerd[1465]: time="2025-11-08T00:23:10.332862039Z" level=info msg="StopPodSandbox for \"b4066538d92a329cb083a36df5dd7437e20a2a4774b0a8d2a8ff964f50e6b6d5\" returns successfully"
Nov 8 00:23:10.350855 containerd[1465]: time="2025-11-08T00:23:10.350493643Z" level=info msg="shim disconnected" id=f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0 namespace=k8s.io
Nov 8 00:23:10.350855 containerd[1465]: time="2025-11-08T00:23:10.350572651Z" level=warning msg="cleaning up after shim disconnected" id=f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0 namespace=k8s.io
Nov 8 00:23:10.350855 containerd[1465]: time="2025-11-08T00:23:10.350582160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:10.373864 containerd[1465]: time="2025-11-08T00:23:10.373803105Z" level=info msg="TearDown network for sandbox \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" successfully"
Nov 8 00:23:10.373864 containerd[1465]: time="2025-11-08T00:23:10.373852958Z" level=info msg="StopPodSandbox for \"f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0\" returns successfully"
Nov 8 00:23:10.445899 kubelet[2531]: I1108 00:23:10.445090 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20efe8c4-791b-4f39-8abf-12fa3cae44d2-cilium-config-path\") pod \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\" (UID: \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\") "
Nov 8 00:23:10.445899 kubelet[2531]: I1108 00:23:10.445160 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpwp2\" (UniqueName: \"kubernetes.io/projected/20efe8c4-791b-4f39-8abf-12fa3cae44d2-kube-api-access-tpwp2\") pod \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\" (UID: \"20efe8c4-791b-4f39-8abf-12fa3cae44d2\") "
Nov 8 00:23:10.449563 kubelet[2531]: I1108 00:23:10.449450 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20efe8c4-791b-4f39-8abf-12fa3cae44d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20efe8c4-791b-4f39-8abf-12fa3cae44d2" (UID: "20efe8c4-791b-4f39-8abf-12fa3cae44d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 8 00:23:10.449911 kubelet[2531]: I1108 00:23:10.449826 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20efe8c4-791b-4f39-8abf-12fa3cae44d2-kube-api-access-tpwp2" (OuterVolumeSpecName: "kube-api-access-tpwp2") pod "20efe8c4-791b-4f39-8abf-12fa3cae44d2" (UID: "20efe8c4-791b-4f39-8abf-12fa3cae44d2"). InnerVolumeSpecName "kube-api-access-tpwp2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 8 00:23:10.545975 kubelet[2531]: I1108 00:23:10.545890 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-cgroup\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.545975 kubelet[2531]: I1108 00:23:10.545960 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-hubble-tls\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.545975 kubelet[2531]: I1108 00:23:10.545977 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-run\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.545975 kubelet[2531]: I1108 00:23:10.545994 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-lib-modules\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.545975 kubelet[2531]: I1108 00:23:10.546007 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-net\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546024 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-etc-cni-netd\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546039 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-xtables-lock\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546070 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffmbg\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-kube-api-access-ffmbg\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546091 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-config-path\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546074 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.546422 kubelet[2531]: I1108 00:23:10.546109 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-kernel\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546162 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546209 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-bpf-maps\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546239 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cni-path\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546270 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb481ab-ab2e-47e7-9282-afd10fac9545-clustermesh-secrets\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546295 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-hostproc\") pod \"9fb481ab-ab2e-47e7-9282-afd10fac9545\" (UID: \"9fb481ab-ab2e-47e7-9282-afd10fac9545\") "
Nov 8 00:23:10.546627 kubelet[2531]: I1108 00:23:10.546321 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546383 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546363 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpwp2\" (UniqueName: \"kubernetes.io/projected/20efe8c4-791b-4f39-8abf-12fa3cae44d2-kube-api-access-tpwp2\") on node \"localhost\" DevicePath \"\""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546407 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546429 2531 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546444 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 8 00:23:10.546899 kubelet[2531]: I1108 00:23:10.546460 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20efe8c4-791b-4f39-8abf-12fa3cae44d2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 8 00:23:10.547131 kubelet[2531]: I1108 00:23:10.546429 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:23:10.547131 kubelet[2531]: I1108 00:23:10.546493 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:23:10.547131 kubelet[2531]: I1108 00:23:10.546513 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:23:10.549905 kubelet[2531]: I1108 00:23:10.548839 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:23:10.549905 kubelet[2531]: I1108 00:23:10.549834 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:23:10.550064 kubelet[2531]: I1108 00:23:10.550038 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:23:10.550958 kubelet[2531]: I1108 00:23:10.550903 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb481ab-ab2e-47e7-9282-afd10fac9545-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:23:10.551159 kubelet[2531]: I1108 00:23:10.551133 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-kube-api-access-ffmbg" (OuterVolumeSpecName: "kube-api-access-ffmbg") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "kube-api-access-ffmbg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:23:10.553126 kubelet[2531]: I1108 00:23:10.553037 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fb481ab-ab2e-47e7-9282-afd10fac9545" (UID: "9fb481ab-ab2e-47e7-9282-afd10fac9545"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:23:10.647721 kubelet[2531]: I1108 00:23:10.647648 2531 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb481ab-ab2e-47e7-9282-afd10fac9545-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.647721 kubelet[2531]: I1108 00:23:10.647734 2531 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647748 2531 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647800 2531 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647811 2531 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647822 2531 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647833 2531 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647846 2531 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647856 2531 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648003 kubelet[2531]: I1108 00:23:10.647868 2531 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb481ab-ab2e-47e7-9282-afd10fac9545-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648219 kubelet[2531]: I1108 00:23:10.647880 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffmbg\" (UniqueName: \"kubernetes.io/projected/9fb481ab-ab2e-47e7-9282-afd10fac9545-kube-api-access-ffmbg\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.648219 kubelet[2531]: I1108 00:23:10.647893 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb481ab-ab2e-47e7-9282-afd10fac9545-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:23:10.991442 kubelet[2531]: E1108 00:23:10.991386 2531 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:23:11.183117 systemd[1]: var-lib-kubelet-pods-20efe8c4\x2d791b\x2d4f39\x2d8abf\x2d12fa3cae44d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpwp2.mount: Deactivated successfully. Nov 8 00:23:11.183253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0-rootfs.mount: Deactivated successfully. 
Nov 8 00:23:11.183342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1bb303be2935e23464c2cdd841d9f4788cb672d142373595fcec1b9fd8798e0-shm.mount: Deactivated successfully. Nov 8 00:23:11.183467 systemd[1]: var-lib-kubelet-pods-9fb481ab\x2dab2e\x2d47e7\x2d9282\x2dafd10fac9545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffmbg.mount: Deactivated successfully. Nov 8 00:23:11.184381 kubelet[2531]: I1108 00:23:11.184289 2531 scope.go:117] "RemoveContainer" containerID="325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea" Nov 8 00:23:11.183568 systemd[1]: var-lib-kubelet-pods-9fb481ab\x2dab2e\x2d47e7\x2d9282\x2dafd10fac9545-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 8 00:23:11.183670 systemd[1]: var-lib-kubelet-pods-9fb481ab\x2dab2e\x2d47e7\x2d9282\x2dafd10fac9545-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 8 00:23:11.189543 containerd[1465]: time="2025-11-08T00:23:11.189310174Z" level=info msg="RemoveContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\"" Nov 8 00:23:11.191227 systemd[1]: Removed slice kubepods-burstable-pod9fb481ab_ab2e_47e7_9282_afd10fac9545.slice - libcontainer container kubepods-burstable-pod9fb481ab_ab2e_47e7_9282_afd10fac9545.slice. Nov 8 00:23:11.191360 systemd[1]: kubepods-burstable-pod9fb481ab_ab2e_47e7_9282_afd10fac9545.slice: Consumed 7.450s CPU time. Nov 8 00:23:11.196540 systemd[1]: Removed slice kubepods-besteffort-pod20efe8c4_791b_4f39_8abf_12fa3cae44d2.slice - libcontainer container kubepods-besteffort-pod20efe8c4_791b_4f39_8abf_12fa3cae44d2.slice. 
Nov 8 00:23:11.429012 containerd[1465]: time="2025-11-08T00:23:11.428951192Z" level=info msg="RemoveContainer for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" returns successfully" Nov 8 00:23:11.429398 kubelet[2531]: I1108 00:23:11.429342 2531 scope.go:117] "RemoveContainer" containerID="2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1" Nov 8 00:23:11.430800 containerd[1465]: time="2025-11-08T00:23:11.430738023Z" level=info msg="RemoveContainer for \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\"" Nov 8 00:23:11.640794 containerd[1465]: time="2025-11-08T00:23:11.640682221Z" level=info msg="RemoveContainer for \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\" returns successfully" Nov 8 00:23:11.641053 kubelet[2531]: I1108 00:23:11.641006 2531 scope.go:117] "RemoveContainer" containerID="3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898" Nov 8 00:23:11.642485 containerd[1465]: time="2025-11-08T00:23:11.642454114Z" level=info msg="RemoveContainer for \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\"" Nov 8 00:23:11.823472 containerd[1465]: time="2025-11-08T00:23:11.823407648Z" level=info msg="RemoveContainer for \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\" returns successfully" Nov 8 00:23:11.823923 kubelet[2531]: I1108 00:23:11.823775 2531 scope.go:117] "RemoveContainer" containerID="f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2" Nov 8 00:23:11.824954 containerd[1465]: time="2025-11-08T00:23:11.824913372Z" level=info msg="RemoveContainer for \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\"" Nov 8 00:23:11.845744 containerd[1465]: time="2025-11-08T00:23:11.845662191Z" level=info msg="RemoveContainer for \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\" returns successfully" Nov 8 00:23:11.846031 kubelet[2531]: I1108 00:23:11.845993 2531 scope.go:117] "RemoveContainer" 
containerID="afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774" Nov 8 00:23:11.847013 containerd[1465]: time="2025-11-08T00:23:11.846983360Z" level=info msg="RemoveContainer for \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\"" Nov 8 00:23:11.851189 containerd[1465]: time="2025-11-08T00:23:11.851151566Z" level=info msg="RemoveContainer for \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\" returns successfully" Nov 8 00:23:11.851341 kubelet[2531]: I1108 00:23:11.851308 2531 scope.go:117] "RemoveContainer" containerID="325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea" Nov 8 00:23:11.854995 containerd[1465]: time="2025-11-08T00:23:11.854942685Z" level=error msg="ContainerStatus for \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\": not found" Nov 8 00:23:11.864698 kubelet[2531]: E1108 00:23:11.864641 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\": not found" containerID="325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea" Nov 8 00:23:11.864864 kubelet[2531]: I1108 00:23:11.864694 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea"} err="failed to get container status \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\": rpc error: code = NotFound desc = an error occurred when try to find container \"325aa9d65923cc093c9d92e64c6c60a4908faf7e07e24be3a301ee4051475fea\": not found" Nov 8 00:23:11.864864 kubelet[2531]: I1108 00:23:11.864831 2531 scope.go:117] "RemoveContainer" 
containerID="2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1" Nov 8 00:23:11.865234 containerd[1465]: time="2025-11-08T00:23:11.865181976Z" level=error msg="ContainerStatus for \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\": not found" Nov 8 00:23:11.865384 kubelet[2531]: E1108 00:23:11.865349 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\": not found" containerID="2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1" Nov 8 00:23:11.865414 kubelet[2531]: I1108 00:23:11.865380 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1"} err="failed to get container status \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dc5557861bd0b846f74fa0786be66ddd7663e9222ed63074b5f61521f94ebc1\": not found" Nov 8 00:23:11.865414 kubelet[2531]: I1108 00:23:11.865399 2531 scope.go:117] "RemoveContainer" containerID="3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898" Nov 8 00:23:11.865611 containerd[1465]: time="2025-11-08T00:23:11.865576696Z" level=error msg="ContainerStatus for \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\": not found" Nov 8 00:23:11.865726 kubelet[2531]: E1108 00:23:11.865703 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\": not found" containerID="3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898" Nov 8 00:23:11.865785 kubelet[2531]: I1108 00:23:11.865727 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898"} err="failed to get container status \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bd4cb942cda7a41b7b4a6964f4894aa7c0e0ac6911592bf3d8a159440e28898\": not found" Nov 8 00:23:11.865785 kubelet[2531]: I1108 00:23:11.865774 2531 scope.go:117] "RemoveContainer" containerID="f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2" Nov 8 00:23:11.865976 containerd[1465]: time="2025-11-08T00:23:11.865938725Z" level=error msg="ContainerStatus for \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\": not found" Nov 8 00:23:11.866086 kubelet[2531]: E1108 00:23:11.866055 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\": not found" containerID="f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2" Nov 8 00:23:11.866165 kubelet[2531]: I1108 00:23:11.866081 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2"} err="failed to get container status \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"f137e984bd4e9e96bd50af0a9fb4f8fee866b37beb20f0e3ebfbec51ffb914e2\": not found" Nov 8 00:23:11.866165 kubelet[2531]: I1108 00:23:11.866111 2531 scope.go:117] "RemoveContainer" containerID="afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774" Nov 8 00:23:11.866301 containerd[1465]: time="2025-11-08T00:23:11.866267021Z" level=error msg="ContainerStatus for \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\": not found" Nov 8 00:23:11.866393 kubelet[2531]: E1108 00:23:11.866365 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\": not found" containerID="afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774" Nov 8 00:23:11.866443 kubelet[2531]: I1108 00:23:11.866390 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774"} err="failed to get container status \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\": rpc error: code = NotFound desc = an error occurred when try to find container \"afff9e592328ece89a1626cb266fa00e6ea0703c68fa2a1b43fd1bd9420e9774\": not found" Nov 8 00:23:11.866443 kubelet[2531]: I1108 00:23:11.866408 2531 scope.go:117] "RemoveContainer" containerID="a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f" Nov 8 00:23:11.867325 containerd[1465]: time="2025-11-08T00:23:11.867298195Z" level=info msg="RemoveContainer for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\"" Nov 8 00:23:11.871924 containerd[1465]: time="2025-11-08T00:23:11.871877011Z" level=info msg="RemoveContainer for 
\"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" returns successfully" Nov 8 00:23:11.872241 kubelet[2531]: I1108 00:23:11.872116 2531 scope.go:117] "RemoveContainer" containerID="a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f" Nov 8 00:23:11.872621 containerd[1465]: time="2025-11-08T00:23:11.872558269Z" level=error msg="ContainerStatus for \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\": not found" Nov 8 00:23:11.872765 kubelet[2531]: E1108 00:23:11.872709 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\": not found" containerID="a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f" Nov 8 00:23:11.872900 kubelet[2531]: I1108 00:23:11.872775 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f"} err="failed to get container status \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a60215d610930761edf23eeb1ccedc2393a1f20ba22e82d61d5268798326663f\": not found" Nov 8 00:23:11.939222 kubelet[2531]: I1108 00:23:11.939158 2531 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20efe8c4-791b-4f39-8abf-12fa3cae44d2" path="/var/lib/kubelet/pods/20efe8c4-791b-4f39-8abf-12fa3cae44d2/volumes" Nov 8 00:23:11.939979 kubelet[2531]: I1108 00:23:11.939930 2531 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb481ab-ab2e-47e7-9282-afd10fac9545" path="/var/lib/kubelet/pods/9fb481ab-ab2e-47e7-9282-afd10fac9545/volumes" Nov 8 
00:23:12.073694 sshd[4197]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:12.088872 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:33728.service: Deactivated successfully. Nov 8 00:23:12.091218 systemd[1]: session-27.scope: Deactivated successfully. Nov 8 00:23:12.092822 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit. Nov 8 00:23:12.098139 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:33740.service - OpenSSH per-connection server daemon (10.0.0.1:33740). Nov 8 00:23:12.100111 systemd-logind[1451]: Removed session 27. Nov 8 00:23:12.132031 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:23:12.134186 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:12.138640 systemd-logind[1451]: New session 28 of user core. Nov 8 00:23:12.145931 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 8 00:23:12.554025 sshd[4361]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:12.565011 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:33740.service: Deactivated successfully. Nov 8 00:23:12.568860 systemd[1]: session-28.scope: Deactivated successfully. Nov 8 00:23:12.569833 kubelet[2531]: I1108 00:23:12.569797 2531 memory_manager.go:355] "RemoveStaleState removing state" podUID="20efe8c4-791b-4f39-8abf-12fa3cae44d2" containerName="cilium-operator" Nov 8 00:23:12.569833 kubelet[2531]: I1108 00:23:12.569830 2531 memory_manager.go:355] "RemoveStaleState removing state" podUID="9fb481ab-ab2e-47e7-9282-afd10fac9545" containerName="cilium-agent" Nov 8 00:23:12.574405 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit. Nov 8 00:23:12.588204 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:33748.service - OpenSSH per-connection server daemon (10.0.0.1:33748). Nov 8 00:23:12.591826 systemd-logind[1451]: Removed session 28. 
Nov 8 00:23:12.596793 systemd[1]: Created slice kubepods-burstable-podb62e312c_c0d1_4529_b125_74c6525ab6ae.slice - libcontainer container kubepods-burstable-podb62e312c_c0d1_4529_b125_74c6525ab6ae.slice. Nov 8 00:23:12.632541 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 33748 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:23:12.634249 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:12.638056 systemd-logind[1451]: New session 29 of user core. Nov 8 00:23:12.646922 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 8 00:23:12.700164 sshd[4374]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:12.718528 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:33748.service: Deactivated successfully. Nov 8 00:23:12.720670 systemd[1]: session-29.scope: Deactivated successfully. Nov 8 00:23:12.722514 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit. Nov 8 00:23:12.728190 systemd[1]: Started sshd@29-10.0.0.54:22-10.0.0.1:33760.service - OpenSSH per-connection server daemon (10.0.0.1:33760). Nov 8 00:23:12.729530 systemd-logind[1451]: Removed session 29. 
Nov 8 00:23:12.760734 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 33760 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:23:12.763014 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:12.764141 kubelet[2531]: I1108 00:23:12.764108 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b62e312c-c0d1-4529-b125-74c6525ab6ae-cilium-config-path\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4" Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764168 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b62e312c-c0d1-4529-b125-74c6525ab6ae-hubble-tls\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4" Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764197 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-lib-modules\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4" Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764219 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-xtables-lock\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4" Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764247 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-cni-path\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764269 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b62e312c-c0d1-4529-b125-74c6525ab6ae-clustermesh-secrets\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764498 kubelet[2531]: I1108 00:23:12.764290 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-host-proc-sys-kernel\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764309 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-cilium-cgroup\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764331 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b62e312c-c0d1-4529-b125-74c6525ab6ae-cilium-ipsec-secrets\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764352 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-host-proc-sys-net\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764375 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-cilium-run\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764395 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-etc-cni-netd\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764724 kubelet[2531]: I1108 00:23:12.764441 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-bpf-maps\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764987 kubelet[2531]: I1108 00:23:12.764464 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b62e312c-c0d1-4529-b125-74c6525ab6ae-hostproc\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.764987 kubelet[2531]: I1108 00:23:12.764487 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqkf\" (UniqueName: \"kubernetes.io/projected/b62e312c-c0d1-4529-b125-74c6525ab6ae-kube-api-access-6rqkf\") pod \"cilium-rjtt4\" (UID: \"b62e312c-c0d1-4529-b125-74c6525ab6ae\") " pod="kube-system/cilium-rjtt4"
Nov 8 00:23:12.768879 systemd-logind[1451]: New session 30 of user core.
Nov 8 00:23:12.774927 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 8 00:23:12.902218 kubelet[2531]: E1108 00:23:12.901991 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:12.904410 containerd[1465]: time="2025-11-08T00:23:12.904315145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rjtt4,Uid:b62e312c-c0d1-4529-b125-74c6525ab6ae,Namespace:kube-system,Attempt:0,}"
Nov 8 00:23:12.928213 containerd[1465]: time="2025-11-08T00:23:12.928074951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:23:12.928213 containerd[1465]: time="2025-11-08T00:23:12.928150162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:23:12.928436 containerd[1465]: time="2025-11-08T00:23:12.928200316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:23:12.929243 containerd[1465]: time="2025-11-08T00:23:12.929189330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:23:12.947922 systemd[1]: Started cri-containerd-8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b.scope - libcontainer container 8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b.
Nov 8 00:23:12.979385 containerd[1465]: time="2025-11-08T00:23:12.979281285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rjtt4,Uid:b62e312c-c0d1-4529-b125-74c6525ab6ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\""
Nov 8 00:23:12.980773 kubelet[2531]: E1108 00:23:12.980271 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:12.983901 containerd[1465]: time="2025-11-08T00:23:12.983735468Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 8 00:23:13.440995 containerd[1465]: time="2025-11-08T00:23:13.440741959Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f\""
Nov 8 00:23:13.441663 containerd[1465]: time="2025-11-08T00:23:13.441608514Z" level=info msg="StartContainer for \"87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f\""
Nov 8 00:23:13.474970 systemd[1]: Started cri-containerd-87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f.scope - libcontainer container 87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f.
Nov 8 00:23:13.506181 containerd[1465]: time="2025-11-08T00:23:13.506105462Z" level=info msg="StartContainer for \"87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f\" returns successfully"
Nov 8 00:23:13.517969 systemd[1]: cri-containerd-87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f.scope: Deactivated successfully.
Nov 8 00:23:13.559572 containerd[1465]: time="2025-11-08T00:23:13.559242377Z" level=info msg="shim disconnected" id=87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f namespace=k8s.io
Nov 8 00:23:13.559572 containerd[1465]: time="2025-11-08T00:23:13.559320343Z" level=warning msg="cleaning up after shim disconnected" id=87cdf3e708847e6447959b70c40fbd66d22a3a0bf0283dd58c49425c7a56ed6f namespace=k8s.io
Nov 8 00:23:13.559572 containerd[1465]: time="2025-11-08T00:23:13.559339950Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:14.199092 kubelet[2531]: E1108 00:23:14.198360 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:14.200802 containerd[1465]: time="2025-11-08T00:23:14.200741247Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 8 00:23:14.217181 containerd[1465]: time="2025-11-08T00:23:14.217128547Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa\""
Nov 8 00:23:14.218048 containerd[1465]: time="2025-11-08T00:23:14.217971387Z" level=info msg="StartContainer for \"42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa\""
Nov 8 00:23:14.256993 systemd[1]: Started cri-containerd-42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa.scope - libcontainer container 42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa.
Nov 8 00:23:14.288882 containerd[1465]: time="2025-11-08T00:23:14.288829373Z" level=info msg="StartContainer for \"42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa\" returns successfully"
Nov 8 00:23:14.297666 systemd[1]: cri-containerd-42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa.scope: Deactivated successfully.
Nov 8 00:23:14.323085 containerd[1465]: time="2025-11-08T00:23:14.322996805Z" level=info msg="shim disconnected" id=42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa namespace=k8s.io
Nov 8 00:23:14.323085 containerd[1465]: time="2025-11-08T00:23:14.323068229Z" level=warning msg="cleaning up after shim disconnected" id=42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa namespace=k8s.io
Nov 8 00:23:14.323085 containerd[1465]: time="2025-11-08T00:23:14.323080572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:14.870750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42a9d92fc3e900e92b7600b49c0cf5c56496d975abeb0d10fdafc514bf7eccfa-rootfs.mount: Deactivated successfully.
Nov 8 00:23:15.202306 kubelet[2531]: E1108 00:23:15.202033 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:15.208795 containerd[1465]: time="2025-11-08T00:23:15.207346437Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:23:15.250409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208444154.mount: Deactivated successfully.
Nov 8 00:23:15.255249 containerd[1465]: time="2025-11-08T00:23:15.255170548Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1\""
Nov 8 00:23:15.257818 containerd[1465]: time="2025-11-08T00:23:15.255949910Z" level=info msg="StartContainer for \"a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1\""
Nov 8 00:23:15.293101 systemd[1]: Started cri-containerd-a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1.scope - libcontainer container a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1.
Nov 8 00:23:15.328345 containerd[1465]: time="2025-11-08T00:23:15.328288432Z" level=info msg="StartContainer for \"a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1\" returns successfully"
Nov 8 00:23:15.329769 systemd[1]: cri-containerd-a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1.scope: Deactivated successfully.
Nov 8 00:23:15.358953 containerd[1465]: time="2025-11-08T00:23:15.358864660Z" level=info msg="shim disconnected" id=a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1 namespace=k8s.io
Nov 8 00:23:15.358953 containerd[1465]: time="2025-11-08T00:23:15.358942205Z" level=warning msg="cleaning up after shim disconnected" id=a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1 namespace=k8s.io
Nov 8 00:23:15.358953 containerd[1465]: time="2025-11-08T00:23:15.358957193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:15.870380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a450b672700271fd6a67b866230c14d05cc5b8da8bf6967975fb3783f7256ed1-rootfs.mount: Deactivated successfully.
Nov 8 00:23:15.936691 kubelet[2531]: E1108 00:23:15.936610 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:15.992664 kubelet[2531]: E1108 00:23:15.992602 2531 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 8 00:23:16.206714 kubelet[2531]: E1108 00:23:16.206539 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:16.208859 containerd[1465]: time="2025-11-08T00:23:16.208798770Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 8 00:23:16.232255 containerd[1465]: time="2025-11-08T00:23:16.232182381Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5\""
Nov 8 00:23:16.234223 containerd[1465]: time="2025-11-08T00:23:16.232926566Z" level=info msg="StartContainer for \"88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5\""
Nov 8 00:23:16.272053 systemd[1]: Started cri-containerd-88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5.scope - libcontainer container 88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5.
Nov 8 00:23:16.300136 systemd[1]: cri-containerd-88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5.scope: Deactivated successfully.
Nov 8 00:23:16.306323 containerd[1465]: time="2025-11-08T00:23:16.301409537Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb62e312c_c0d1_4529_b125_74c6525ab6ae.slice/cri-containerd-88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5.scope/memory.events\": no such file or directory"
Nov 8 00:23:16.307212 containerd[1465]: time="2025-11-08T00:23:16.307032081Z" level=info msg="StartContainer for \"88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5\" returns successfully"
Nov 8 00:23:16.335254 containerd[1465]: time="2025-11-08T00:23:16.335163143Z" level=info msg="shim disconnected" id=88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5 namespace=k8s.io
Nov 8 00:23:16.335254 containerd[1465]: time="2025-11-08T00:23:16.335232674Z" level=warning msg="cleaning up after shim disconnected" id=88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5 namespace=k8s.io
Nov 8 00:23:16.335254 containerd[1465]: time="2025-11-08T00:23:16.335241991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:16.870675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88538cb2ff7a71534ee34f834a316508468ccf028dc500198f545d23b20e18e5-rootfs.mount: Deactivated successfully.
Nov 8 00:23:16.936204 kubelet[2531]: E1108 00:23:16.936138 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:17.210847 kubelet[2531]: E1108 00:23:17.210676 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:17.212472 containerd[1465]: time="2025-11-08T00:23:17.212422493Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 8 00:23:17.229937 containerd[1465]: time="2025-11-08T00:23:17.229869140Z" level=info msg="CreateContainer within sandbox \"8adc46426bcc88fab9c0c30bb0665ddd49db64d40038befd84297ceddb41626b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dfe0eb776094e7183f63d50b473da36b3edf290f4377ea82531dfd7ba048ec66\""
Nov 8 00:23:17.231735 containerd[1465]: time="2025-11-08T00:23:17.230557721Z" level=info msg="StartContainer for \"dfe0eb776094e7183f63d50b473da36b3edf290f4377ea82531dfd7ba048ec66\""
Nov 8 00:23:17.272049 systemd[1]: Started cri-containerd-dfe0eb776094e7183f63d50b473da36b3edf290f4377ea82531dfd7ba048ec66.scope - libcontainer container dfe0eb776094e7183f63d50b473da36b3edf290f4377ea82531dfd7ba048ec66.
Nov 8 00:23:17.308859 containerd[1465]: time="2025-11-08T00:23:17.308802397Z" level=info msg="StartContainer for \"dfe0eb776094e7183f63d50b473da36b3edf290f4377ea82531dfd7ba048ec66\" returns successfully"
Nov 8 00:23:17.786798 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 8 00:23:17.948873 kubelet[2531]: I1108 00:23:17.948790 2531 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:23:17Z","lastTransitionTime":"2025-11-08T00:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 8 00:23:18.256104 kubelet[2531]: E1108 00:23:18.255978 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:18.567059 kubelet[2531]: I1108 00:23:18.566979 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rjtt4" podStartSLOduration=6.566956969 podStartE2EDuration="6.566956969s" podCreationTimestamp="2025-11-08 00:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:23:18.56677071 +0000 UTC m=+92.722459217" watchObservedRunningTime="2025-11-08 00:23:18.566956969 +0000 UTC m=+92.722645456"
Nov 8 00:23:19.258940 kubelet[2531]: E1108 00:23:19.258349 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:20.259991 kubelet[2531]: E1108 00:23:20.259945 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:20.935897 kubelet[2531]: E1108 00:23:20.935845 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:21.186352 systemd-networkd[1395]: lxc_health: Link UP
Nov 8 00:23:21.195024 systemd-networkd[1395]: lxc_health: Gained carrier
Nov 8 00:23:22.540125 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Nov 8 00:23:22.904374 kubelet[2531]: E1108 00:23:22.904203 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:23.265592 kubelet[2531]: E1108 00:23:23.265552 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:23.768808 kubelet[2531]: E1108 00:23:23.768717 2531 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36020->127.0.0.1:32841: read tcp 127.0.0.1:36020->127.0.0.1:32841: read: connection reset by peer
Nov 8 00:23:23.768986 kubelet[2531]: E1108 00:23:23.768639 2531 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36020->127.0.0.1:32841: write tcp 127.0.0.1:36020->127.0.0.1:32841: write: broken pipe
Nov 8 00:23:24.267099 kubelet[2531]: E1108 00:23:24.267075 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:24.935791 kubelet[2531]: E1108 00:23:24.935343 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:25.938501 kubelet[2531]: E1108 00:23:25.938451 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:28.225509 sshd[4382]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:28.229801 systemd[1]: sshd@29-10.0.0.54:22-10.0.0.1:33760.service: Deactivated successfully.
Nov 8 00:23:28.232116 systemd[1]: session-30.scope: Deactivated successfully.
Nov 8 00:23:28.232984 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit.
Nov 8 00:23:28.234031 systemd-logind[1451]: Removed session 30.