Nov 24 00:28:00.844880 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:54:38 -00 2025
Nov 24 00:28:00.844900 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801
Nov 24 00:28:00.844911 kernel: BIOS-provided physical RAM map:
Nov 24 00:28:00.844918 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 00:28:00.844924 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 00:28:00.844931 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 00:28:00.844939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 24 00:28:00.844946 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 24 00:28:00.844952 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 24 00:28:00.844959 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 24 00:28:00.844974 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 00:28:00.844983 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 00:28:00.844989 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 24 00:28:00.844996 kernel: NX (Execute Disable) protection: active
Nov 24 00:28:00.845004 kernel: APIC: Static calls initialized
Nov 24 00:28:00.845011 kernel: SMBIOS 2.8 present.
Nov 24 00:28:00.845021 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 24 00:28:00.845028 kernel: DMI: Memory slots populated: 1/1
Nov 24 00:28:00.845036 kernel: Hypervisor detected: KVM
Nov 24 00:28:00.845043 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 00:28:00.845050 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 00:28:00.845057 kernel: kvm-clock: using sched offset of 3740786961 cycles
Nov 24 00:28:00.845064 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 00:28:00.845072 kernel: tsc: Detected 2794.748 MHz processor
Nov 24 00:28:00.845079 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 00:28:00.845087 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 00:28:00.845097 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 00:28:00.845104 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 00:28:00.845111 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 00:28:00.845119 kernel: Using GB pages for direct mapping
Nov 24 00:28:00.845126 kernel: ACPI: Early table checksum verification disabled
Nov 24 00:28:00.845133 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 24 00:28:00.845141 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845148 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845155 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845165 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 24 00:28:00.845172 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845179 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845186 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845194 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:28:00.845204 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 24 00:28:00.845211 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 24 00:28:00.845221 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 24 00:28:00.845228 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 24 00:28:00.845236 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 24 00:28:00.845243 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 24 00:28:00.845250 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 24 00:28:00.845258 kernel: No NUMA configuration found
Nov 24 00:28:00.845265 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 24 00:28:00.845275 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 24 00:28:00.845282 kernel: Zone ranges:
Nov 24 00:28:00.845290 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 00:28:00.845297 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 24 00:28:00.845305 kernel: Normal empty
Nov 24 00:28:00.845312 kernel: Device empty
Nov 24 00:28:00.845319 kernel: Movable zone start for each node
Nov 24 00:28:00.845326 kernel: Early memory node ranges
Nov 24 00:28:00.845335 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 00:28:00.845344 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 24 00:28:00.845355 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 24 00:28:00.845363 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 00:28:00.845371 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 00:28:00.845378 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 24 00:28:00.845385 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 00:28:00.845393 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 00:28:00.845400 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 00:28:00.845408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 00:28:00.845415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 00:28:00.845424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 00:28:00.845432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 00:28:00.845439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 00:28:00.847186 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 00:28:00.847200 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 00:28:00.847207 kernel: TSC deadline timer available
Nov 24 00:28:00.847215 kernel: CPU topo: Max. logical packages: 1
Nov 24 00:28:00.847222 kernel: CPU topo: Max. logical dies: 1
Nov 24 00:28:00.847230 kernel: CPU topo: Max. dies per package: 1
Nov 24 00:28:00.847241 kernel: CPU topo: Max. threads per core: 1
Nov 24 00:28:00.847249 kernel: CPU topo: Num. cores per package: 4
Nov 24 00:28:00.847256 kernel: CPU topo: Num. threads per package: 4
Nov 24 00:28:00.847263 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 24 00:28:00.847271 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 00:28:00.847279 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 24 00:28:00.847286 kernel: kvm-guest: setup PV sched yield
Nov 24 00:28:00.847293 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 24 00:28:00.847301 kernel: Booting paravirtualized kernel on KVM
Nov 24 00:28:00.847308 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 00:28:00.847318 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 24 00:28:00.847326 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 24 00:28:00.847334 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 24 00:28:00.847341 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 24 00:28:00.847348 kernel: kvm-guest: PV spinlocks enabled
Nov 24 00:28:00.847355 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 00:28:00.847364 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801
Nov 24 00:28:00.847372 kernel: random: crng init done
Nov 24 00:28:00.847382 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 00:28:00.847389 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 00:28:00.847397 kernel: Fallback order for Node 0: 0
Nov 24 00:28:00.847404 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 24 00:28:00.847412 kernel: Policy zone: DMA32
Nov 24 00:28:00.847419 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 00:28:00.847427 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 24 00:28:00.847434 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 00:28:00.847442 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 00:28:00.847471 kernel: Dynamic Preempt: voluntary
Nov 24 00:28:00.847478 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 00:28:00.847487 kernel: rcu: RCU event tracing is enabled.
Nov 24 00:28:00.847494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 24 00:28:00.847502 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 00:28:00.847509 kernel: Rude variant of Tasks RCU enabled.
Nov 24 00:28:00.847517 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 00:28:00.847524 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 00:28:00.847532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 24 00:28:00.847542 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:28:00.847549 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:28:00.847557 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:28:00.847564 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 24 00:28:00.847572 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 00:28:00.847587 kernel: Console: colour VGA+ 80x25
Nov 24 00:28:00.847597 kernel: printk: legacy console [ttyS0] enabled
Nov 24 00:28:00.847605 kernel: ACPI: Core revision 20240827
Nov 24 00:28:00.847613 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 00:28:00.847621 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 00:28:00.847628 kernel: x2apic enabled
Nov 24 00:28:00.847636 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 00:28:00.847646 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 24 00:28:00.847654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 24 00:28:00.847662 kernel: kvm-guest: setup PV IPIs
Nov 24 00:28:00.847669 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 00:28:00.847677 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 24 00:28:00.847687 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 24 00:28:00.847695 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 00:28:00.847703 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 00:28:00.847710 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 00:28:00.847718 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 00:28:00.847726 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 00:28:00.847734 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 00:28:00.847741 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 00:28:00.847749 kernel: active return thunk: retbleed_return_thunk
Nov 24 00:28:00.847759 kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 00:28:00.847767 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 00:28:00.847775 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 00:28:00.847782 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 00:28:00.847791 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 00:28:00.847799 kernel: active return thunk: srso_return_thunk
Nov 24 00:28:00.847806 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 00:28:00.847814 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 00:28:00.847824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 00:28:00.847832 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 00:28:00.847840 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 00:28:00.847848 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 00:28:00.847855 kernel: Freeing SMP alternatives memory: 32K
Nov 24 00:28:00.847863 kernel: pid_max: default: 32768 minimum: 301
Nov 24 00:28:00.847871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 00:28:00.847878 kernel: landlock: Up and running.
Nov 24 00:28:00.847886 kernel: SELinux: Initializing.
Nov 24 00:28:00.847896 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:28:00.847904 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:28:00.847912 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 00:28:00.847920 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 00:28:00.847928 kernel: ... version: 0
Nov 24 00:28:00.847935 kernel: ... bit width: 48
Nov 24 00:28:00.847943 kernel: ... generic registers: 6
Nov 24 00:28:00.847951 kernel: ... value mask: 0000ffffffffffff
Nov 24 00:28:00.847958 kernel: ... max period: 00007fffffffffff
Nov 24 00:28:00.847976 kernel: ... fixed-purpose events: 0
Nov 24 00:28:00.847984 kernel: ... event mask: 000000000000003f
Nov 24 00:28:00.847991 kernel: signal: max sigframe size: 1776
Nov 24 00:28:00.847999 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 00:28:00.848007 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 00:28:00.848014 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 00:28:00.848022 kernel: smp: Bringing up secondary CPUs ...
Nov 24 00:28:00.848030 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 00:28:00.848037 kernel: .... node #0, CPUs: #1 #2 #3
Nov 24 00:28:00.848049 kernel: smp: Brought up 1 node, 4 CPUs
Nov 24 00:28:00.848057 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 24 00:28:00.848065 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145096K reserved, 0K cma-reserved)
Nov 24 00:28:00.848073 kernel: devtmpfs: initialized
Nov 24 00:28:00.848081 kernel: x86/mm: Memory block size: 128MB
Nov 24 00:28:00.848089 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 00:28:00.848097 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 24 00:28:00.848104 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 00:28:00.848112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 00:28:00.848122 kernel: audit: initializing netlink subsys (disabled)
Nov 24 00:28:00.848130 kernel: audit: type=2000 audit(1763944078.473:1): state=initialized audit_enabled=0 res=1
Nov 24 00:28:00.848138 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 00:28:00.848146 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 00:28:00.848153 kernel: cpuidle: using governor menu
Nov 24 00:28:00.848161 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 00:28:00.848169 kernel: dca service started, version 1.12.1
Nov 24 00:28:00.848177 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 24 00:28:00.848184 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 24 00:28:00.848194 kernel: PCI: Using configuration type 1 for base access
Nov 24 00:28:00.848202 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 00:28:00.848210 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 00:28:00.848217 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 00:28:00.848225 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 00:28:00.848233 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 00:28:00.848240 kernel: ACPI: Added _OSI(Module Device)
Nov 24 00:28:00.848248 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 00:28:00.848256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 00:28:00.848266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 00:28:00.848273 kernel: ACPI: Interpreter enabled
Nov 24 00:28:00.848281 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 24 00:28:00.848288 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 00:28:00.848297 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 00:28:00.848304 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 00:28:00.848312 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 24 00:28:00.848320 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 00:28:00.848527 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 00:28:00.848654 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 24 00:28:00.848772 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 24 00:28:00.848782 kernel: PCI host bridge to bus 0000:00
Nov 24 00:28:00.848903 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 00:28:00.849021 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 00:28:00.849127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 00:28:00.849266 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 24 00:28:00.849386 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 00:28:00.849508 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 24 00:28:00.849615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 00:28:00.849753 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 24 00:28:00.849880 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 24 00:28:00.850013 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 24 00:28:00.850129 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 24 00:28:00.850243 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 24 00:28:00.850358 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 00:28:00.850520 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 00:28:00.850641 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 24 00:28:00.850757 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 24 00:28:00.850879 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 24 00:28:00.851015 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 00:28:00.851133 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 24 00:28:00.851249 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 24 00:28:00.851364 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 24 00:28:00.851508 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 00:28:00.851627 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 24 00:28:00.851748 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 24 00:28:00.851866 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 24 00:28:00.851992 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 24 00:28:00.852122 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 24 00:28:00.852238 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 24 00:28:00.852363 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 24 00:28:00.852501 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 24 00:28:00.852622 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 24 00:28:00.852746 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 24 00:28:00.852861 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 24 00:28:00.852872 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 00:28:00.852880 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 00:28:00.852888 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 00:28:00.852896 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 00:28:00.852907 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 24 00:28:00.852915 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 24 00:28:00.852923 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 24 00:28:00.852931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 24 00:28:00.852938 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 24 00:28:00.852946 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 24 00:28:00.852953 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 24 00:28:00.852961 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 24 00:28:00.852980 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 24 00:28:00.852990 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 24 00:28:00.852998 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 24 00:28:00.853006 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 24 00:28:00.853014 kernel: iommu: Default domain type: Translated
Nov 24 00:28:00.853021 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 00:28:00.853029 kernel: PCI: Using ACPI for IRQ routing
Nov 24 00:28:00.853037 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 00:28:00.853045 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 00:28:00.853053 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 24 00:28:00.853173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 24 00:28:00.853289 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 24 00:28:00.853407 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 00:28:00.853419 kernel: vgaarb: loaded
Nov 24 00:28:00.853427 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 00:28:00.853434 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 00:28:00.853442 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 00:28:00.853478 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 00:28:00.853486 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 00:28:00.853498 kernel: pnp: PnP ACPI init
Nov 24 00:28:00.853636 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 24 00:28:00.853648 kernel: pnp: PnP ACPI: found 6 devices
Nov 24 00:28:00.853656 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 00:28:00.853664 kernel: NET: Registered PF_INET protocol family
Nov 24 00:28:00.853672 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 00:28:00.853680 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 24 00:28:00.853688 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 00:28:00.853699 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 00:28:00.853707 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 24 00:28:00.853715 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 24 00:28:00.853722 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:28:00.853730 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:28:00.853738 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 00:28:00.853746 kernel: NET: Registered PF_XDP protocol family
Nov 24 00:28:00.853863 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 00:28:00.853980 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 00:28:00.854099 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 00:28:00.854211 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 24 00:28:00.854316 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 24 00:28:00.854434 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 24 00:28:00.854444 kernel: PCI: CLS 0 bytes, default 64
Nov 24 00:28:00.854467 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 24 00:28:00.854475 kernel: Initialise system trusted keyrings
Nov 24 00:28:00.854483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 24 00:28:00.854495 kernel: Key type asymmetric registered
Nov 24 00:28:00.854502 kernel: Asymmetric key parser 'x509' registered
Nov 24 00:28:00.854510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 00:28:00.854519 kernel: io scheduler mq-deadline registered
Nov 24 00:28:00.854527 kernel: io scheduler kyber registered
Nov 24 00:28:00.854534 kernel: io scheduler bfq registered
Nov 24 00:28:00.854542 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 00:28:00.854551 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 24 00:28:00.854559 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 24 00:28:00.854570 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 24 00:28:00.854578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 00:28:00.854586 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:28:00.854594 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 00:28:00.854602 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 00:28:00.854610 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 00:28:00.854733 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 00:28:00.854744 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 00:28:00.857531 kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 00:28:00.857650 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T00:28:00 UTC (1763944080)
Nov 24 00:28:00.857759 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 24 00:28:00.857771 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 00:28:00.857779 kernel: NET: Registered PF_INET6 protocol family
Nov 24 00:28:00.857787 kernel: Segment Routing with IPv6
Nov 24 00:28:00.857796 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 00:28:00.857804 kernel: NET: Registered PF_PACKET protocol family
Nov 24 00:28:00.857812 kernel: Key type dns_resolver registered
Nov 24 00:28:00.857823 kernel: IPI shorthand broadcast: enabled
Nov 24 00:28:00.857832 kernel: sched_clock: Marking stable (2714001967, 200352836)->(3027010645, -112655842)
Nov 24 00:28:00.857840 kernel: registered taskstats version 1
Nov 24 00:28:00.857848 kernel: Loading compiled-in X.509 certificates
Nov 24 00:28:00.857856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 5d380f93d180914be04be8068ab300f495c35900'
Nov 24 00:28:00.857864 kernel: Demotion targets for Node 0: null
Nov 24 00:28:00.857872 kernel: Key type .fscrypt registered
Nov 24 00:28:00.857880 kernel: Key type fscrypt-provisioning registered
Nov 24 00:28:00.857888 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 00:28:00.857898 kernel: ima: Allocated hash algorithm: sha1
Nov 24 00:28:00.857907 kernel: ima: No architecture policies found
Nov 24 00:28:00.857914 kernel: clk: Disabling unused clocks
Nov 24 00:28:00.857922 kernel: Warning: unable to open an initial console.
Nov 24 00:28:00.857931 kernel: Freeing unused kernel image (initmem) memory: 46188K
Nov 24 00:28:00.857939 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 00:28:00.857947 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 00:28:00.857955 kernel: Run /init as init process
Nov 24 00:28:00.857972 kernel: with arguments:
Nov 24 00:28:00.857983 kernel: /init
Nov 24 00:28:00.857991 kernel: with environment:
Nov 24 00:28:00.857999 kernel: HOME=/
Nov 24 00:28:00.858007 kernel: TERM=linux
Nov 24 00:28:00.858016 systemd[1]: Successfully made /usr/ read-only.
Nov 24 00:28:00.858028 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:28:00.858050 systemd[1]: Detected virtualization kvm.
Nov 24 00:28:00.858060 systemd[1]: Detected architecture x86-64.
Nov 24 00:28:00.858068 systemd[1]: Running in initrd.
Nov 24 00:28:00.858077 systemd[1]: No hostname configured, using default hostname.
Nov 24 00:28:00.858086 systemd[1]: Hostname set to .
Nov 24 00:28:00.858094 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 00:28:00.858103 systemd[1]: Queued start job for default target initrd.target.
Nov 24 00:28:00.858112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:28:00.858123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:28:00.858132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 00:28:00.858141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 00:28:00.858150 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 00:28:00.858160 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 00:28:00.858170 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 00:28:00.858182 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 00:28:00.858191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:28:00.858202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:28:00.858210 systemd[1]: Reached target paths.target - Path Units.
Nov 24 00:28:00.858219 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 00:28:00.858228 systemd[1]: Reached target swap.target - Swaps.
Nov 24 00:28:00.858237 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 00:28:00.858246 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 00:28:00.858254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:28:00.858266 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 00:28:00.858274 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 00:28:00.858283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:28:00.858291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:28:00.858300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:28:00.858309 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:28:00.858318 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 24 00:28:00.858329 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:28:00.858338 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 24 00:28:00.858347 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 24 00:28:00.858356 systemd[1]: Starting systemd-fsck-usr.service...
Nov 24 00:28:00.858365 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:28:00.858373 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:28:00.858382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:28:00.858393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 24 00:28:00.858402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:28:00.858411 systemd[1]: Finished systemd-fsck-usr.service.
Nov 24 00:28:00.858440 systemd-journald[201]: Collecting audit messages is disabled.
Nov 24 00:28:00.858478 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 00:28:00.858488 systemd-journald[201]: Journal started
Nov 24 00:28:00.858509 systemd-journald[201]: Runtime Journal (/run/log/journal/9b96516c0d23416998b22f777103def2) is 6M, max 48.3M, 42.2M free.
Nov 24 00:28:00.843567 systemd-modules-load[202]: Inserted module 'overlay'
Nov 24 00:28:00.928516 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:28:00.928550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 00:28:00.928566 kernel: Bridge firewalling registered
Nov 24 00:28:00.874088 systemd-modules-load[202]: Inserted module 'br_netfilter'
Nov 24 00:28:00.927414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:28:00.929021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:28:00.933512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 00:28:00.940591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 24 00:28:00.943565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:28:00.947033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:28:00.963314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:28:00.971636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:28:00.974034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:28:00.975718 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 24 00:28:00.981618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:28:00.984471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:28:00.997634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:28:01.002857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 24 00:28:01.037047 systemd-resolved[237]: Positive Trust Anchors:
Nov 24 00:28:01.037063 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:28:01.037092 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 00:28:01.039735 systemd-resolved[237]: Defaulting to hostname 'linux'.
Nov 24 00:28:01.040808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 00:28:01.041257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:28:01.064203 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801
Nov 24 00:28:01.172491 kernel: SCSI subsystem initialized
Nov 24 00:28:01.182487 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 00:28:01.193489 kernel: iscsi: registered transport (tcp)
Nov 24 00:28:01.214897 kernel: iscsi: registered transport (qla4xxx)
Nov 24 00:28:01.214949 kernel: QLogic iSCSI HBA Driver
Nov 24 00:28:01.236978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:28:01.266762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:28:01.268563 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 00:28:01.324312 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 24 00:28:01.325855 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 24 00:28:01.383493 kernel: raid6: avx2x4 gen() 30536 MB/s
Nov 24 00:28:01.400477 kernel: raid6: avx2x2 gen() 31059 MB/s
Nov 24 00:28:01.418200 kernel: raid6: avx2x1 gen() 25937 MB/s
Nov 24 00:28:01.418217 kernel: raid6: using algorithm avx2x2 gen() 31059 MB/s
Nov 24 00:28:01.436211 kernel: raid6: .... xor() 19943 MB/s, rmw enabled
Nov 24 00:28:01.436254 kernel: raid6: using avx2x2 recovery algorithm
Nov 24 00:28:01.457499 kernel: xor: automatically using best checksumming function avx
Nov 24 00:28:01.622483 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 24 00:28:01.630485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 00:28:01.633810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:28:01.659439 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Nov 24 00:28:01.664714 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:28:01.665521 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 24 00:28:01.696610 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Nov 24 00:28:01.723376 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 00:28:01.724839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 00:28:01.804793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:28:01.811151 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 24 00:28:01.841468 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 24 00:28:01.846805 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 24 00:28:01.856313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 24 00:28:01.856365 kernel: GPT:9289727 != 19775487
Nov 24 00:28:01.856376 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 24 00:28:01.856386 kernel: GPT:9289727 != 19775487
Nov 24 00:28:01.856396 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 24 00:28:01.856412 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 00:28:01.856423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 00:28:01.870473 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 24 00:28:01.873496 kernel: AES CTR mode by8 optimization enabled
Nov 24 00:28:01.879520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:28:01.879643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:28:01.887148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:28:01.901028 kernel: libata version 3.00 loaded.
Nov 24 00:28:01.894786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:28:01.898836 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:28:01.916875 kernel: ahci 0000:00:1f.2: version 3.0
Nov 24 00:28:01.917112 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 24 00:28:01.920574 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 24 00:28:01.920736 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 24 00:28:01.920878 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 24 00:28:01.925478 kernel: scsi host0: ahci
Nov 24 00:28:01.927474 kernel: scsi host1: ahci
Nov 24 00:28:01.928475 kernel: scsi host2: ahci
Nov 24 00:28:01.928629 kernel: scsi host3: ahci
Nov 24 00:28:01.928772 kernel: scsi host4: ahci
Nov 24 00:28:01.929478 kernel: scsi host5: ahci
Nov 24 00:28:01.929647 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Nov 24 00:28:01.929659 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Nov 24 00:28:01.929669 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Nov 24 00:28:01.929679 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Nov 24 00:28:01.929689 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Nov 24 00:28:01.929699 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Nov 24 00:28:01.940627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 24 00:28:02.012179 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 24 00:28:02.012528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:28:02.027963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 24 00:28:02.034736 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 24 00:28:02.034818 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 24 00:28:02.043971 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 24 00:28:02.075560 disk-uuid[618]: Primary Header is updated.
Nov 24 00:28:02.075560 disk-uuid[618]: Secondary Entries is updated.
Nov 24 00:28:02.075560 disk-uuid[618]: Secondary Header is updated.
Nov 24 00:28:02.080627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 00:28:02.235878 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 24 00:28:02.235945 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 24 00:28:02.237170 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 24 00:28:02.237486 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 24 00:28:02.241061 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 24 00:28:02.241115 kernel: ata3.00: LPM support broken, forcing max_power
Nov 24 00:28:02.241127 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 00:28:02.242228 kernel: ata3.00: applying bridge limits
Nov 24 00:28:02.244346 kernel: ata3.00: LPM support broken, forcing max_power
Nov 24 00:28:02.244360 kernel: ata3.00: configured for UDMA/100
Nov 24 00:28:02.246479 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 24 00:28:02.248481 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 24 00:28:02.294250 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 00:28:02.294525 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 00:28:02.320488 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 24 00:28:02.692752 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 24 00:28:02.695204 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 00:28:02.698772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:28:02.700876 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 00:28:02.705690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 24 00:28:02.739137 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 00:28:03.088160 disk-uuid[619]: The operation has completed successfully.
Nov 24 00:28:03.090519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 00:28:03.117248 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 24 00:28:03.117359 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 24 00:28:03.159445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 24 00:28:03.183410 sh[647]: Success
Nov 24 00:28:03.203632 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 00:28:03.203679 kernel: device-mapper: uevent: version 1.0.3
Nov 24 00:28:03.205469 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 24 00:28:03.214507 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Nov 24 00:28:03.243229 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 24 00:28:03.246976 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 24 00:28:03.262128 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 24 00:28:03.272224 kernel: BTRFS: device fsid c993ebd2-0e38-4cfc-8615-2c75294bea72 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (659)
Nov 24 00:28:03.272258 kernel: BTRFS info (device dm-0): first mount of filesystem c993ebd2-0e38-4cfc-8615-2c75294bea72
Nov 24 00:28:03.272272 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:28:03.276758 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 24 00:28:03.276811 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 24 00:28:03.277955 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 24 00:28:03.280003 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 00:28:03.281047 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 24 00:28:03.282141 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 24 00:28:03.289346 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 24 00:28:03.324867 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (692)
Nov 24 00:28:03.324941 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4
Nov 24 00:28:03.324958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:28:03.330811 kernel: BTRFS info (device vda6): turning on async discard
Nov 24 00:28:03.330871 kernel: BTRFS info (device vda6): enabling free space tree
Nov 24 00:28:03.336485 kernel: BTRFS info (device vda6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4
Nov 24 00:28:03.337354 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 24 00:28:03.341788 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 24 00:28:03.429622 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 00:28:03.434690 ignition[741]: Ignition 2.22.0
Nov 24 00:28:03.434703 ignition[741]: Stage: fetch-offline
Nov 24 00:28:03.434734 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:28:03.436567 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 00:28:03.434755 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 00:28:03.434837 ignition[741]: parsed url from cmdline: ""
Nov 24 00:28:03.434840 ignition[741]: no config URL provided
Nov 24 00:28:03.434845 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 00:28:03.434853 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Nov 24 00:28:03.434876 ignition[741]: op(1): [started] loading QEMU firmware config module
Nov 24 00:28:03.434881 ignition[741]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 24 00:28:03.455227 ignition[741]: op(1): [finished] loading QEMU firmware config module
Nov 24 00:28:03.486413 systemd-networkd[836]: lo: Link UP
Nov 24 00:28:03.486425 systemd-networkd[836]: lo: Gained carrier
Nov 24 00:28:03.488087 systemd-networkd[836]: Enumeration completed
Nov 24 00:28:03.488234 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 00:28:03.488441 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:28:03.488445 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 00:28:03.489557 systemd-networkd[836]: eth0: Link UP
Nov 24 00:28:03.489759 systemd-networkd[836]: eth0: Gained carrier
Nov 24 00:28:03.489768 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:28:03.491697 systemd[1]: Reached target network.target - Network.
Nov 24 00:28:03.521516 systemd-networkd[836]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 24 00:28:03.555661 ignition[741]: parsing config with SHA512: 9300e6b993e84201a9fc8878401f6293003fc5d93ea5bb1513d8ac426a256c1ffb77a1197a3fd6e9168d248253cff2c5051d344805eed861f9b50a5718730c44
Nov 24 00:28:03.560583 unknown[741]: fetched base config from "system"
Nov 24 00:28:03.560596 unknown[741]: fetched user config from "qemu"
Nov 24 00:28:03.561067 ignition[741]: fetch-offline: fetch-offline passed
Nov 24 00:28:03.564021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 00:28:03.561115 ignition[741]: Ignition finished successfully
Nov 24 00:28:03.567731 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 24 00:28:03.568561 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 24 00:28:03.616470 ignition[841]: Ignition 2.22.0
Nov 24 00:28:03.616480 ignition[841]: Stage: kargs
Nov 24 00:28:03.616600 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:28:03.616610 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 00:28:03.617262 ignition[841]: kargs: kargs passed
Nov 24 00:28:03.617302 ignition[841]: Ignition finished successfully
Nov 24 00:28:03.625301 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 24 00:28:03.627915 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 24 00:28:03.664658 ignition[849]: Ignition 2.22.0
Nov 24 00:28:03.664670 ignition[849]: Stage: disks
Nov 24 00:28:03.664829 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:28:03.664838 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 00:28:03.667620 ignition[849]: disks: disks passed
Nov 24 00:28:03.667666 ignition[849]: Ignition finished successfully
Nov 24 00:28:03.675184 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 24 00:28:03.677254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 24 00:28:03.678623 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 24 00:28:03.682071 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 00:28:03.685859 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 00:28:03.688937 systemd[1]: Reached target basic.target - Basic System.
Nov 24 00:28:03.691165 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 24 00:28:03.731832 systemd-fsck[859]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 24 00:28:04.154238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 24 00:28:04.158014 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 24 00:28:04.390479 kernel: EXT4-fs (vda9): mounted filesystem 5d9d0447-100f-4769-adb5-76fdba966eb2 r/w with ordered data mode. Quota mode: none.
Nov 24 00:28:04.390646 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 24 00:28:04.393718 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 24 00:28:04.398342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 00:28:04.401836 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 24 00:28:04.404818 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 24 00:28:04.404864 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 24 00:28:04.404895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 00:28:04.418489 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 24 00:28:04.422354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 24 00:28:04.430293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Nov 24 00:28:04.430313 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4
Nov 24 00:28:04.430324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:28:04.432741 kernel: BTRFS info (device vda6): turning on async discard
Nov 24 00:28:04.432760 kernel: BTRFS info (device vda6): enabling free space tree
Nov 24 00:28:04.433968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 00:28:04.456666 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory
Nov 24 00:28:04.461119 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
Nov 24 00:28:04.466461 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
Nov 24 00:28:04.470390 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 24 00:28:04.550641 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 24 00:28:04.566167 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 24 00:28:04.568714 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 24 00:28:04.588261 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 24 00:28:04.590768 kernel: BTRFS info (device vda6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4
Nov 24 00:28:04.602643 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 24 00:28:04.621388 ignition[980]: INFO : Ignition 2.22.0
Nov 24 00:28:04.621388 ignition[980]: INFO : Stage: mount
Nov 24 00:28:04.623961 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 00:28:04.623961 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 00:28:04.623961 ignition[980]: INFO : mount: mount passed
Nov 24 00:28:04.623961 ignition[980]: INFO : Ignition finished successfully
Nov 24 00:28:04.631270 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 24 00:28:04.635308 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 24 00:28:05.393032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 00:28:05.411629 systemd-networkd[836]: eth0: Gained IPv6LL Nov 24 00:28:05.423487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993) Nov 24 00:28:05.426669 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:28:05.426697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:28:05.430959 kernel: BTRFS info (device vda6): turning on async discard Nov 24 00:28:05.430993 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 00:28:05.432765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:28:05.471421 ignition[1010]: INFO : Ignition 2.22.0 Nov 24 00:28:05.471421 ignition[1010]: INFO : Stage: files Nov 24 00:28:05.474743 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:28:05.474743 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:28:05.474743 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:28:05.474743 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:28:05.474743 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:28:05.474743 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:28:05.492566 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:28:05.492566 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:28:05.492566 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:28:05.492566 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:28:05.475367 unknown[1010]: wrote ssh authorized keys file for user: core Nov 24 00:28:05.526789 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:28:05.599482 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:28:05.615266 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 24 00:28:05.615266 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 24 00:28:05.816760 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 24 00:28:05.952026 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 24 00:28:05.952026 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:28:05.957723 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:28:05.957723 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:28:05.963416 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Nov 24 00:28:05.963416 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:28:05.963416 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:28:05.963416 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:28:05.963416 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:28:05.978970 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 00:28:06.329920 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 24 00:28:06.657009 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:28:06.657009 ignition[1010]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 24 00:28:06.663394 ignition[1010]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: op(10): op(11): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:28:06.686496 ignition[1010]: INFO : files: files passed Nov 24 00:28:06.686496 ignition[1010]: INFO : Ignition finished successfully Nov 24 00:28:06.687527 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:28:06.691947 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:28:06.695788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:28:06.725821 initrd-setup-root-after-ignition[1038]: grep: /sysroot/oem/oem-release: No such file or directory Nov 24 00:28:06.707987 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:28:06.729614 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:28:06.729614 initrd-setup-root-after-ignition[1040]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:28:06.708158 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:28:06.736503 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:28:06.715921 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:28:06.718028 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:28:06.720561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:28:06.761725 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:28:06.761867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:28:06.765522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:28:06.767387 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:28:06.772098 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:28:06.773751 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:28:06.802556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:28:06.806668 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:28:06.840412 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:28:06.840586 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:28:06.841147 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:28:06.847492 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:28:06.847600 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
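
Editor's note: the files stage above ends by writing `/sysroot/etc/.ignition-result.json`; once the system has switched root that file is visible at `/etc/.ignition-result.json`. A minimal Python sketch for inspecting it, deliberately making no assumption about the key names inside the file:

```python
# Dump whatever the Ignition result file records. The path comes from the
# log above; the schema of the file is not assumed here.
import json

with open("/etc/.ignition-result.json") as f:
    result = json.load(f)

for key, value in result.items():
    print(f"{key}: {value}")
```
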
Nov 24 00:28:06.853504 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:28:06.855348 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:28:06.859910 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:28:06.861337 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:28:06.866330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:28:06.868138 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:28:06.868958 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:28:06.876414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:28:06.878068 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:28:06.883397 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:28:06.885042 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:28:06.888030 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:28:06.888137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:28:06.893427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:28:06.896764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:28:06.900321 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:28:06.903729 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:28:06.903872 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:28:06.903974 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:28:06.911013 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:28:06.911125 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:28:06.914744 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:28:06.917862 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:28:06.918001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:28:06.921714 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:28:06.922086 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:28:06.927374 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:28:06.927479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:28:06.931301 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:28:06.931382 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:28:06.934353 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:28:06.934478 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:28:06.937731 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:28:06.937839 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:28:06.941555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:28:06.945076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:28:06.945168 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:28:06.945276 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 24 00:28:06.946070 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:28:06.946160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:28:06.961201 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:28:06.961393 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:28:06.977640 ignition[1065]: INFO : Ignition 2.22.0 Nov 24 00:28:06.977640 ignition[1065]: INFO : Stage: umount Nov 24 00:28:06.980518 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:28:06.980518 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:28:06.980518 ignition[1065]: INFO : umount: umount passed Nov 24 00:28:06.980518 ignition[1065]: INFO : Ignition finished successfully Nov 24 00:28:06.981105 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:28:06.981276 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:28:06.982982 systemd[1]: Stopped target network.target - Network. Nov 24 00:28:06.983372 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:28:06.983466 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:28:06.983984 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:28:06.984041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:28:06.984243 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:28:06.984305 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:28:06.984782 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:28:06.984846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:28:06.985193 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:28:06.985428 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:28:06.990940 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:28:06.999343 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:28:06.999569 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:28:07.014120 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:28:07.014793 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:28:07.014898 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:28:07.020674 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:28:07.020969 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:28:07.021104 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:28:07.025567 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:28:07.026166 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:28:07.045131 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:28:07.045192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:28:07.060014 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:28:07.060098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:28:07.060150 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 24 00:28:07.068715 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:28:07.068790 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:28:07.073685 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:28:07.073736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:28:07.074112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:28:07.079510 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:28:07.102282 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:28:07.102484 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:28:07.104101 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:28:07.104143 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:28:07.107738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:28:07.107772 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:28:07.110967 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:28:07.111017 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:28:07.118663 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:28:07.118715 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:28:07.123281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:28:07.123332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:28:07.131779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:28:07.131867 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:28:07.131920 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:28:07.138847 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:28:07.138893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:28:07.144684 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 24 00:28:07.144730 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:28:07.150385 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:28:07.150432 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:28:07.152699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:28:07.152746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:28:07.160075 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:28:07.164377 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:28:07.170970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:28:07.171094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:28:07.223318 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:28:07.223511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:28:07.227101 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Nov 24 00:28:07.227913 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:28:07.227986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:28:07.236030 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:28:07.267506 systemd[1]: Switching root. Nov 24 00:28:07.313913 systemd-journald[201]: Journal stopped Nov 24 00:28:08.737742 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 24 00:28:08.737833 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:28:08.737848 kernel: SELinux: policy capability open_perms=1 Nov 24 00:28:08.737859 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:28:08.737876 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:28:08.737892 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:28:08.737907 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:28:08.737919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:28:08.737930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:28:08.737941 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:28:08.737955 kernel: audit: type=1403 audit(1763944087.869:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:28:08.737967 systemd[1]: Successfully loaded SELinux policy in 66.142ms. Nov 24 00:28:08.737996 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.959ms. Nov 24 00:28:08.738010 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:28:08.738023 systemd[1]: Detected virtualization kvm. Nov 24 00:28:08.738039 systemd[1]: Detected architecture x86-64. Nov 24 00:28:08.738050 systemd[1]: Detected first boot. Nov 24 00:28:08.738062 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:28:08.738074 zram_generator::config[1111]: No configuration found. Nov 24 00:28:08.738088 kernel: Guest personality initialized and is inactive Nov 24 00:28:08.738099 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 00:28:08.738116 kernel: Initialized host personality Nov 24 00:28:08.738127 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:28:08.738141 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:28:08.738153 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:28:08.738165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:28:08.738177 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:28:08.738189 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:28:08.738201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:28:08.738213 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:28:08.738225 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:28:08.738237 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:28:08.738251 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Nov 24 00:28:08.738263 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:28:08.738275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:28:08.738287 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:28:08.738299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:28:08.738311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:28:08.738324 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:28:08.738336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:28:08.738349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:28:08.738364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:28:08.738376 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:28:08.738388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:28:08.738400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:28:08.738412 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:28:08.738424 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:28:08.738436 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:28:08.738463 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:28:08.738476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:28:08.738488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:28:08.738500 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:28:08.738512 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:28:08.738524 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:28:08.738536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:28:08.738547 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:28:08.738559 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:28:08.738573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:28:08.738588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:28:08.738599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:28:08.738611 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:28:08.738623 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:28:08.738635 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:28:08.738647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:08.738659 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:28:08.738671 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:28:08.738685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 24 00:28:08.738698 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:28:08.738710 systemd[1]: Reached target machines.target - Containers. Nov 24 00:28:08.738722 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:28:08.738734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:28:08.738746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:28:08.738766 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:28:08.738778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:28:08.738791 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:28:08.738804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:28:08.738816 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:28:08.738828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:28:08.738842 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:28:08.738854 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:28:08.738866 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:28:08.738877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:28:08.738890 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:28:08.738904 kernel: fuse: init (API version 7.41) Nov 24 00:28:08.738917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:28:08.738929 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:28:08.738940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:28:08.738953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:28:08.738965 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:28:08.738977 kernel: ACPI: bus type drm_connector registered Nov 24 00:28:08.738989 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:28:08.739001 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:28:08.739015 kernel: loop: module loaded Nov 24 00:28:08.739026 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:28:08.739044 systemd[1]: Stopped verity-setup.service. Nov 24 00:28:08.739056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:08.739106 systemd-journald[1193]: Collecting audit messages is disabled. Nov 24 00:28:08.739131 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:28:08.739143 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 24 00:28:08.739157 systemd-journald[1193]: Journal started Nov 24 00:28:08.739180 systemd-journald[1193]: Runtime Journal (/run/log/journal/9b96516c0d23416998b22f777103def2) is 6M, max 48.3M, 42.2M free. Nov 24 00:28:08.411209 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:28:08.431434 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 24 00:28:08.431910 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:28:08.742476 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:28:08.745120 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:28:08.747086 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:28:08.749132 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:28:08.751133 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:28:08.753299 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:28:08.756119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:28:08.758856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:28:08.759142 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:28:08.761817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:28:08.762083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:28:08.764638 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:28:08.764918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:28:08.767275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:28:08.767572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:28:08.770254 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:28:08.770536 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:28:08.773252 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:28:08.773656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:28:08.775920 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:28:08.778325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:28:08.781021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:28:08.783888 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:28:08.798369 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:28:08.802408 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:28:08.805733 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:28:08.807965 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:28:08.807994 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:28:08.808978 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:28:08.826522 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 24 00:28:08.828966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:28:08.830545 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:28:08.833617 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:28:08.835884 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:28:08.836806 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:28:08.839107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:28:08.841592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:28:08.847222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:28:08.851055 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:28:08.854401 systemd-journald[1193]: Time spent on flushing to /var/log/journal/9b96516c0d23416998b22f777103def2 is 14.640ms for 984 entries. Nov 24 00:28:08.854401 systemd-journald[1193]: System Journal (/var/log/journal/9b96516c0d23416998b22f777103def2) is 8M, max 195.6M, 187.6M free. Nov 24 00:28:08.896822 systemd-journald[1193]: Received client request to flush runtime journal. Nov 24 00:28:08.896890 kernel: loop0: detected capacity change from 0 to 229808 Nov 24 00:28:08.896914 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:28:08.857712 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:28:08.860235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:28:08.862474 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:28:08.871974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:28:08.877790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:28:08.885362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:28:08.889049 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:28:08.903499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:28:08.909790 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 24 00:28:08.909807 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 24 00:28:08.914729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:28:08.919369 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:28:08.926482 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 00:28:08.927797 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:28:08.962476 kernel: loop2: detected capacity change from 0 to 110984 Nov 24 00:28:08.969351 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:28:08.975049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
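
Editor's note: the journald message above ("Time spent on flushing ... is 14.640ms for 984 entries") works out to roughly 15 µs per entry. A minimal sketch of that arithmetic, using only the numbers from the log:

```python
# Per-entry cost of the runtime-to-persistent journal flush reported above.
flush_ms = 14.640
entries = 984

per_entry_us = flush_ms * 1000 / entries
print(f"average flush cost: {per_entry_us:.1f} us per entry")  # ~14.9 us
```
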
Nov 24 00:28:08.993470 kernel: loop3: detected capacity change from 0 to 229808 Nov 24 00:28:09.008485 kernel: loop4: detected capacity change from 0 to 128560 Nov 24 00:28:09.010161 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 24 00:28:09.010185 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 24 00:28:09.015944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:28:09.019516 kernel: loop5: detected capacity change from 0 to 110984 Nov 24 00:28:09.032992 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 24 00:28:09.033582 (sd-merge)[1256]: Merged extensions into '/usr'. Nov 24 00:28:09.038191 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:28:09.038206 systemd[1]: Reloading... Nov 24 00:28:09.094477 zram_generator::config[1280]: No configuration found. Nov 24 00:28:09.159402 ldconfig[1225]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:28:09.292659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:28:09.293068 systemd[1]: Reloading finished in 251 ms. Nov 24 00:28:09.323723 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:28:09.325962 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:28:09.345727 systemd[1]: Starting ensure-sysext.service... Nov 24 00:28:09.348211 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:28:09.363350 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:28:09.363390 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:28:09.363720 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:28:09.363982 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:28:09.364867 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:28:09.365135 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Nov 24 00:28:09.365205 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Nov 24 00:28:09.366318 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:28:09.366337 systemd[1]: Reloading... Nov 24 00:28:09.369389 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:28:09.369402 systemd-tmpfiles[1322]: Skipping /boot Nov 24 00:28:09.379175 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:28:09.379189 systemd-tmpfiles[1322]: Skipping /boot Nov 24 00:28:09.414494 zram_generator::config[1349]: No configuration found. Nov 24 00:28:09.605204 systemd[1]: Reloading finished in 238 ms. Nov 24 00:28:09.634616 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:28:09.655761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:28:09.665325 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
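
Editor's note: systemd-tmpfiles reports several "Duplicate line for path ..., ignoring" warnings above (nfs-utils.conf, provision.conf, systemd-flatcar.conf, systemd.conf). A minimal Python sketch that reproduces the same kind of check outside systemd-tmpfiles; it is a simplified parser (no quoting, %-specifiers, or the masking rules between directories), intended only for illustration:

```python
# Scan tmpfiles.d fragments and report paths declared by more than one line.
from collections import defaultdict
from pathlib import Path

SEARCH_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

declarations = defaultdict(list)
for directory in SEARCH_DIRS:
    base = Path(directory)
    if not base.is_dir():
        continue
    for conf in sorted(base.glob("*.conf")):
        for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                declarations[fields[1]].append(f"{conf}:{lineno}")

for path, sources in sorted(declarations.items()):
    if len(sources) > 1:
        print(f"duplicate declarations for {path}: {', '.join(sources)}")
```
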
Nov 24 00:28:09.668430 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:28:09.680605 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:28:09.685395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:28:09.688634 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:28:09.691637 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:28:09.697233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:09.697399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:28:09.700517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:28:09.707642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:28:09.711144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:28:09.713011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:28:09.713108 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:28:09.716624 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:28:09.718375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:09.719853 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:28:09.722555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:28:09.722768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:28:09.725108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:28:09.725296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:28:09.727916 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:28:09.728106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:28:09.740497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:09.740712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:28:09.744649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:28:09.749671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:28:09.754568 systemd-udevd[1393]: Using default interface naming scheme 'v255'. Nov 24 00:28:09.754633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:28:09.756528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 00:28:09.756632 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:28:09.760536 augenrules[1424]: No rules Nov 24 00:28:09.764040 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:28:09.766668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:09.768245 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:28:09.769532 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:28:09.772859 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:28:09.778514 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:28:09.781139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:28:09.781417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:28:09.784076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:28:09.784343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:28:09.786935 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:28:09.787200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:28:09.789703 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:28:09.791917 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:28:09.808293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:28:09.812901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:28:09.814881 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:28:09.816511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:28:09.819817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:28:09.824744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:28:09.829071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:28:09.835752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:28:09.838652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:28:09.838766 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:28:09.842767 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:28:09.844483 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:28:09.844578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 24 00:28:09.852774 systemd[1]: Finished ensure-sysext.service. Nov 24 00:28:09.854477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:28:09.854840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:28:09.859174 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:28:09.860711 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:28:09.873412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 00:28:09.876227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:28:09.877881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:28:09.880380 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:28:09.880827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:28:09.885967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:28:09.888566 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:28:09.889882 augenrules[1455]: /sbin/augenrules: No change Nov 24 00:28:09.898441 augenrules[1499]: No rules Nov 24 00:28:09.899100 systemd-resolved[1391]: Positive Trust Anchors: Nov 24 00:28:09.899324 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:28:09.899394 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:28:09.903008 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:28:09.903014 systemd-resolved[1391]: Defaulting to hostname 'linux'. Nov 24 00:28:09.903821 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:28:09.905698 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:28:09.908126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:28:09.940371 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:28:10.003048 systemd-networkd[1468]: lo: Link UP Nov 24 00:28:10.003064 systemd-networkd[1468]: lo: Gained carrier Nov 24 00:28:10.004698 systemd-networkd[1468]: Enumeration completed Nov 24 00:28:10.004791 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:28:10.006998 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:28:10.007006 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:28:10.007386 systemd[1]: Reached target network.target - Network. 
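
Editor's note: systemd-networkd warns twice in this log that `zz-default.network` matched "based on potentially unpredictable interface name". A minimal sketch of one way to pin the match instead, by dropping a unit into /etc that matches on the MAC address; the MAC address and file name below are placeholders, only the `[Match] MACAddress=` and `[Network] DHCP=` keys are standard systemd.network options:

```python
# Write a .network unit that matches the NIC by MAC instead of by name.
from pathlib import Path
from textwrap import dedent

unit = dedent("""\
    [Match]
    MACAddress=52:54:00:12:34:56

    [Network]
    DHCP=yes
    """)

Path("/etc/systemd/network/10-wired.network").write_text(unit)
print("wrote /etc/systemd/network/10-wired.network; restart systemd-networkd to apply")
```
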
Nov 24 00:28:10.009775 systemd-networkd[1468]: eth0: Link UP Nov 24 00:28:10.009985 systemd-networkd[1468]: eth0: Gained carrier Nov 24 00:28:10.010011 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:28:10.010991 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:28:10.015487 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:28:10.014050 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:28:10.020021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 00:28:10.028529 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 00:28:10.030708 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:28:10.031964 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 00:28:10.035464 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:28:10.049756 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 00:28:10.051881 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:28:10.053880 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:28:10.056288 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:28:10.926570 systemd-resolved[1391]: Clock change detected. Flushing caches. Nov 24 00:28:10.926613 systemd-timesyncd[1480]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 24 00:28:10.926659 systemd-timesyncd[1480]: Initial clock synchronization to Mon 2025-11-24 00:28:10.926523 UTC. Nov 24 00:28:10.928226 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:28:10.932823 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 24 00:28:10.933082 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 00:28:10.932941 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:28:10.935068 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:28:10.935097 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:28:10.938192 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:28:10.940153 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:28:10.942095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:28:10.944353 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:28:10.946762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:28:10.950131 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:28:10.955131 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:28:10.957305 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:28:10.959271 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Nov 24 00:28:10.963700 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:28:10.965617 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:28:10.968624 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:28:10.970968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:28:10.973163 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:28:10.984833 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:28:10.986467 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:28:10.989121 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:28:10.989153 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:28:10.992159 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:28:10.994781 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:28:10.998225 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:28:10.999311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:28:11.009811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:28:11.011771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:28:11.016198 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:28:11.020616 jq[1542]: false Nov 24 00:28:11.024493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:28:11.027864 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:28:11.029119 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache Nov 24 00:28:11.029090 oslogin_cache_refresh[1544]: Refreshing passwd entry cache Nov 24 00:28:11.031209 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:28:11.035320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:28:11.043200 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:28:11.050040 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting Nov 24 00:28:11.050040 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:28:11.050040 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache Nov 24 00:28:11.050210 extend-filesystems[1543]: Found /dev/vda6 Nov 24 00:28:11.043868 oslogin_cache_refresh[1544]: Failure getting users, quitting Nov 24 00:28:11.045731 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:28:11.055059 extend-filesystems[1543]: Found /dev/vda9 Nov 24 00:28:11.055059 extend-filesystems[1543]: Checking size of /dev/vda9 Nov 24 00:28:11.043888 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 24 00:28:11.046221 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:28:11.061417 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting Nov 24 00:28:11.061417 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:28:11.043935 oslogin_cache_refresh[1544]: Refreshing group entry cache Nov 24 00:28:11.050968 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:28:11.055159 oslogin_cache_refresh[1544]: Failure getting groups, quitting Nov 24 00:28:11.059307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:28:11.055171 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:28:11.071055 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:28:11.073386 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:28:11.074350 extend-filesystems[1543]: Resized partition /dev/vda9 Nov 24 00:28:11.088403 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 24 00:28:11.073626 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:28:11.088525 extend-filesystems[1571]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:28:11.073935 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:28:11.074182 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:28:11.076307 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:28:11.076544 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:28:11.080922 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:28:11.081205 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 00:28:11.097499 jq[1566]: true Nov 24 00:28:11.105977 update_engine[1557]: I20251124 00:28:11.105917 1557 main.cc:92] Flatcar Update Engine starting Nov 24 00:28:11.111090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:28:11.121035 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 24 00:28:11.128129 tar[1572]: linux-amd64/LICENSE Nov 24 00:28:11.138782 jq[1582]: true Nov 24 00:28:11.141961 tar[1572]: linux-amd64/helm Nov 24 00:28:11.142041 dbus-daemon[1540]: [system] SELinux support is enabled Nov 24 00:28:11.142316 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 24 00:28:11.142316 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 24 00:28:11.142316 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 24 00:28:11.150638 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Nov 24 00:28:11.144711 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:28:11.155027 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:28:11.164951 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:28:11.165341 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 24 00:28:11.173209 update_engine[1557]: I20251124 00:28:11.173152 1557 update_check_scheduler.cc:74] Next update check in 8m35s Nov 24 00:28:11.249320 systemd-logind[1553]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:28:11.249347 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:28:11.249632 systemd-logind[1553]: New seat seat0. Nov 24 00:28:11.272231 kernel: kvm_amd: TSC scaling supported Nov 24 00:28:11.272365 kernel: kvm_amd: Nested Virtualization enabled Nov 24 00:28:11.272428 kernel: kvm_amd: Nested Paging enabled Nov 24 00:28:11.272464 kernel: kvm_amd: LBR virtualization supported Nov 24 00:28:11.272491 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 24 00:28:11.272518 kernel: kvm_amd: Virtual GIF supported Nov 24 00:28:11.272542 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:28:11.300040 kernel: EDAC MC: Ver: 3.0.0 Nov 24 00:28:11.306535 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:28:11.352631 containerd[1583]: time="2025-11-24T00:28:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:28:11.355730 containerd[1583]: time="2025-11-24T00:28:11.353336532Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:28:11.361915 containerd[1583]: time="2025-11-24T00:28:11.361866248Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.997µs" Nov 24 00:28:11.361915 containerd[1583]: time="2025-11-24T00:28:11.361903308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:28:11.361969 containerd[1583]: time="2025-11-24T00:28:11.361926611Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:28:11.362277 containerd[1583]: time="2025-11-24T00:28:11.362238727Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:28:11.362308 containerd[1583]: time="2025-11-24T00:28:11.362282599Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:28:11.362328 containerd[1583]: time="2025-11-24T00:28:11.362311303Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:28:11.362452 containerd[1583]: time="2025-11-24T00:28:11.362423453Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:28:11.362452 containerd[1583]: time="2025-11-24T00:28:11.362447398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:28:11.362784 containerd[1583]: time="2025-11-24T00:28:11.362756147Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:28:11.362816 containerd[1583]: time="2025-11-24T00:28:11.362784009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 
00:28:11.362816 containerd[1583]: time="2025-11-24T00:28:11.362797314Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:28:11.362816 containerd[1583]: time="2025-11-24T00:28:11.362809136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.362897783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.363232861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.363277204Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.363293775Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.363323601Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:28:11.363601 containerd[1583]: time="2025-11-24T00:28:11.363595641Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:28:11.363722 containerd[1583]: time="2025-11-24T00:28:11.363665372Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:28:11.370044 containerd[1583]: time="2025-11-24T00:28:11.369997136Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:28:11.370083 containerd[1583]: time="2025-11-24T00:28:11.370062489Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:28:11.370083 containerd[1583]: time="2025-11-24T00:28:11.370076335Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:28:11.370140 containerd[1583]: time="2025-11-24T00:28:11.370087967Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:28:11.370140 containerd[1583]: time="2025-11-24T00:28:11.370100681Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:28:11.370140 containerd[1583]: time="2025-11-24T00:28:11.370119045Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:28:11.370140 containerd[1583]: time="2025-11-24T00:28:11.370131789Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:28:11.370211 containerd[1583]: time="2025-11-24T00:28:11.370143441Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:28:11.370211 containerd[1583]: time="2025-11-24T00:28:11.370160072Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:28:11.370211 containerd[1583]: time="2025-11-24T00:28:11.370172014Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Nov 24 00:28:11.370211 containerd[1583]: time="2025-11-24T00:28:11.370180490Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:28:11.370211 containerd[1583]: time="2025-11-24T00:28:11.370192503Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:28:11.370347 containerd[1583]: time="2025-11-24T00:28:11.370324100Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:28:11.370429 containerd[1583]: time="2025-11-24T00:28:11.370349788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:28:11.370429 containerd[1583]: time="2025-11-24T00:28:11.370363564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:28:11.370429 containerd[1583]: time="2025-11-24T00:28:11.370384443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:28:11.370429 containerd[1583]: time="2025-11-24T00:28:11.370397387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:28:11.370429 containerd[1583]: time="2025-11-24T00:28:11.370412315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:28:11.370531 containerd[1583]: time="2025-11-24T00:28:11.370438735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:28:11.370531 containerd[1583]: time="2025-11-24T00:28:11.370463140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:28:11.370531 containerd[1583]: time="2025-11-24T00:28:11.370487486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:28:11.370531 containerd[1583]: time="2025-11-24T00:28:11.370510720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:28:11.370609 containerd[1583]: time="2025-11-24T00:28:11.370552067Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:28:11.370653 containerd[1583]: time="2025-11-24T00:28:11.370626867Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:28:11.370735 containerd[1583]: time="2025-11-24T00:28:11.370658266Z" level=info msg="Start snapshots syncer" Nov 24 00:28:11.370735 containerd[1583]: time="2025-11-24T00:28:11.370703561Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:28:11.371058 containerd[1583]: time="2025-11-24T00:28:11.370993245Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:28:11.371165 containerd[1583]: time="2025-11-24T00:28:11.371068426Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:28:11.372605 containerd[1583]: time="2025-11-24T00:28:11.372575342Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:28:11.372736 containerd[1583]: time="2025-11-24T00:28:11.372705305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:28:11.372764 containerd[1583]: time="2025-11-24T00:28:11.372737736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:28:11.372764 containerd[1583]: time="2025-11-24T00:28:11.372749438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:28:11.372764 containerd[1583]: time="2025-11-24T00:28:11.372759467Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:28:11.372831 containerd[1583]: time="2025-11-24T00:28:11.372771239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:28:11.372831 containerd[1583]: time="2025-11-24T00:28:11.372781719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:28:11.372831 containerd[1583]: time="2025-11-24T00:28:11.372792198Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:28:11.372831 containerd[1583]: time="2025-11-24T00:28:11.372814310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:28:11.372831 containerd[1583]: 
time="2025-11-24T00:28:11.372824489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372834618Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372860787Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372872900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372881255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372893228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372903607Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:28:11.372922 containerd[1583]: time="2025-11-24T00:28:11.372922442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.372944985Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.372965513Z" level=info msg="runtime interface created" Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.372971234Z" level=info msg="created NRI interface" Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.372979239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.372989238Z" level=info msg="Connect containerd service" Nov 24 00:28:11.373068 containerd[1583]: time="2025-11-24T00:28:11.373021739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:28:11.373757 containerd[1583]: time="2025-11-24T00:28:11.373724286Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:28:11.453443 containerd[1583]: time="2025-11-24T00:28:11.453334273Z" level=info msg="Start subscribing containerd event" Nov 24 00:28:11.453443 containerd[1583]: time="2025-11-24T00:28:11.453420786Z" level=info msg="Start recovering state" Nov 24 00:28:11.453594 containerd[1583]: time="2025-11-24T00:28:11.453567741Z" level=info msg="Start event monitor" Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453611173Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453626551Z" level=info msg="Start streaming server" Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453621652Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453740134Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453643724Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453767205Z" level=info msg="runtime interface starting up..." Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453772806Z" level=info msg="starting plugins..." Nov 24 00:28:11.453904 containerd[1583]: time="2025-11-24T00:28:11.453792964Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:28:11.454071 containerd[1583]: time="2025-11-24T00:28:11.453931483Z" level=info msg="containerd successfully booted in 0.101767s" Nov 24 00:28:11.458431 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:28:11.460654 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:28:11.462848 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:28:11.464916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:28:11.467189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:28:11.481763 dbus-daemon[1540]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 00:28:11.495039 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:28:11.504420 tar[1572]: linux-amd64/README.md Nov 24 00:28:11.506480 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:28:11.508309 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 24 00:28:11.508513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:28:11.508667 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:28:11.510757 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:28:11.510905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:28:11.530227 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:28:11.543568 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:28:11.543852 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:28:11.549373 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:28:11.553791 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:28:11.567986 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:28:11.573885 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:28:11.577527 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:28:11.580345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:28:11.582293 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 24 00:28:12.105212 systemd-networkd[1468]: eth0: Gained IPv6LL Nov 24 00:28:12.107996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:28:12.110633 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:28:12.113882 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 24 00:28:12.117545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:28:12.127281 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:28:12.147566 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 24 00:28:12.147838 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 24 00:28:12.150412 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:28:12.153121 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:28:12.834502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:12.836978 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:28:12.839004 systemd[1]: Startup finished in 2.774s (kernel) + 7.231s (initrd) + 4.162s (userspace) = 14.168s. Nov 24 00:28:12.856327 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:28:13.288848 kubelet[1683]: E1124 00:28:13.288706 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:28:13.293911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:28:13.294168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:28:13.294583 systemd[1]: kubelet.service: Consumed 990ms CPU time, 267.5M memory peak. Nov 24 00:28:14.836291 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:28:14.837651 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812). Nov 24 00:28:14.910511 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:14.912056 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:14.917936 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:28:14.918982 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:28:14.924957 systemd-logind[1553]: New session 1 of user core. Nov 24 00:28:14.940563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:28:14.943312 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:28:14.963198 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:28:14.965381 systemd-logind[1553]: New session c1 of user core. Nov 24 00:28:15.105608 systemd[1701]: Queued start job for default target default.target. Nov 24 00:28:15.122296 systemd[1701]: Created slice app.slice - User Application Slice. Nov 24 00:28:15.122319 systemd[1701]: Reached target paths.target - Paths. 
Nov 24 00:28:15.122354 systemd[1701]: Reached target timers.target - Timers. Nov 24 00:28:15.123729 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:28:15.134219 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:28:15.134357 systemd[1701]: Reached target sockets.target - Sockets. Nov 24 00:28:15.134403 systemd[1701]: Reached target basic.target - Basic System. Nov 24 00:28:15.134450 systemd[1701]: Reached target default.target - Main User Target. Nov 24 00:28:15.134487 systemd[1701]: Startup finished in 163ms. Nov 24 00:28:15.134688 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:28:15.136346 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:28:15.207275 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814). Nov 24 00:28:15.262922 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:15.264355 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:15.268663 systemd-logind[1553]: New session 2 of user core. Nov 24 00:28:15.278152 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:28:15.329944 sshd[1715]: Connection closed by 10.0.0.1 port 53814 Nov 24 00:28:15.330259 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:15.338527 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:53814.service: Deactivated successfully. Nov 24 00:28:15.340199 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:28:15.340957 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:28:15.343230 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Nov 24 00:28:15.343755 systemd-logind[1553]: Removed session 2. Nov 24 00:28:15.401942 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:15.403125 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:15.407026 systemd-logind[1553]: New session 3 of user core. Nov 24 00:28:15.419131 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:28:15.466886 sshd[1724]: Connection closed by 10.0.0.1 port 53828 Nov 24 00:28:15.467230 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:15.478188 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:53828.service: Deactivated successfully. Nov 24 00:28:15.479774 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:28:15.480531 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:28:15.482796 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:53840.service - OpenSSH per-connection server daemon (10.0.0.1:53840). Nov 24 00:28:15.483511 systemd-logind[1553]: Removed session 3. Nov 24 00:28:15.533340 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 53840 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:15.534494 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:15.538451 systemd-logind[1553]: New session 4 of user core. Nov 24 00:28:15.548130 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 24 00:28:15.599450 sshd[1733]: Connection closed by 10.0.0.1 port 53840 Nov 24 00:28:15.599738 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:15.612459 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:53840.service: Deactivated successfully. Nov 24 00:28:15.614056 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:28:15.614813 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:28:15.617129 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:53854.service - OpenSSH per-connection server daemon (10.0.0.1:53854). Nov 24 00:28:15.617700 systemd-logind[1553]: Removed session 4. Nov 24 00:28:15.674382 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:15.675550 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:15.679199 systemd-logind[1553]: New session 5 of user core. Nov 24 00:28:15.691125 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:28:15.746720 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:28:15.747000 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:28:15.765217 sudo[1743]: pam_unix(sudo:session): session closed for user root Nov 24 00:28:15.766542 sshd[1742]: Connection closed by 10.0.0.1 port 53854 Nov 24 00:28:15.766860 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:15.781292 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:53854.service: Deactivated successfully. Nov 24 00:28:15.782810 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:28:15.783540 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:28:15.786050 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:53868.service - OpenSSH per-connection server daemon (10.0.0.1:53868). Nov 24 00:28:15.786630 systemd-logind[1553]: Removed session 5. Nov 24 00:28:15.841030 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 53868 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:15.842305 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:15.846245 systemd-logind[1553]: New session 6 of user core. Nov 24 00:28:15.861124 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:28:15.912470 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:28:15.912752 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:28:15.918371 sudo[1754]: pam_unix(sudo:session): session closed for user root Nov 24 00:28:15.923694 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:28:15.924061 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:28:15.932668 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:28:15.982803 augenrules[1776]: No rules Nov 24 00:28:15.984268 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:28:15.984527 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 24 00:28:15.985586 sudo[1753]: pam_unix(sudo:session): session closed for user root Nov 24 00:28:15.987031 sshd[1752]: Connection closed by 10.0.0.1 port 53868 Nov 24 00:28:15.987383 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:15.997994 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:53868.service: Deactivated successfully. Nov 24 00:28:15.999596 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:28:16.000232 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:28:16.012511 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:53882.service - OpenSSH per-connection server daemon (10.0.0.1:53882). Nov 24 00:28:16.013074 systemd-logind[1553]: Removed session 6. Nov 24 00:28:16.072556 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:28:16.073767 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:28:16.078088 systemd-logind[1553]: New session 7 of user core. Nov 24 00:28:16.088163 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:28:16.140305 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:28:16.140611 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:28:16.425981 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:28:16.442369 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:28:16.666806 dockerd[1809]: time="2025-11-24T00:28:16.666740210Z" level=info msg="Starting up" Nov 24 00:28:16.667534 dockerd[1809]: time="2025-11-24T00:28:16.667496479Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:28:16.679344 dockerd[1809]: time="2025-11-24T00:28:16.679237628Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:28:17.219376 dockerd[1809]: time="2025-11-24T00:28:17.219311218Z" level=info msg="Loading containers: start." Nov 24 00:28:17.230062 kernel: Initializing XFRM netlink socket Nov 24 00:28:17.499733 systemd-networkd[1468]: docker0: Link UP Nov 24 00:28:17.504994 dockerd[1809]: time="2025-11-24T00:28:17.504932620Z" level=info msg="Loading containers: done." Nov 24 00:28:17.520406 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4106348006-merged.mount: Deactivated successfully. 
Nov 24 00:28:17.521794 dockerd[1809]: time="2025-11-24T00:28:17.521728829Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:28:17.521888 dockerd[1809]: time="2025-11-24T00:28:17.521850007Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:28:17.521993 dockerd[1809]: time="2025-11-24T00:28:17.521965854Z" level=info msg="Initializing buildkit" Nov 24 00:28:17.551633 dockerd[1809]: time="2025-11-24T00:28:17.551584199Z" level=info msg="Completed buildkit initialization" Nov 24 00:28:17.557391 dockerd[1809]: time="2025-11-24T00:28:17.557332549Z" level=info msg="Daemon has completed initialization" Nov 24 00:28:17.557518 dockerd[1809]: time="2025-11-24T00:28:17.557445972Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:28:17.557624 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:28:18.270199 containerd[1583]: time="2025-11-24T00:28:18.270137623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:28:18.876266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200291785.mount: Deactivated successfully. Nov 24 00:28:20.045745 containerd[1583]: time="2025-11-24T00:28:20.045684387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:20.046454 containerd[1583]: time="2025-11-24T00:28:20.046388357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 00:28:20.047607 containerd[1583]: time="2025-11-24T00:28:20.047552069Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:20.050022 containerd[1583]: time="2025-11-24T00:28:20.049969904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:20.050871 containerd[1583]: time="2025-11-24T00:28:20.050824066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 1.780637751s" Nov 24 00:28:20.050871 containerd[1583]: time="2025-11-24T00:28:20.050871164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:28:20.051625 containerd[1583]: time="2025-11-24T00:28:20.051427818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:28:21.369502 containerd[1583]: time="2025-11-24T00:28:21.369448677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:21.370148 containerd[1583]: time="2025-11-24T00:28:21.370102474Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 00:28:21.371205 containerd[1583]: time="2025-11-24T00:28:21.371187609Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:21.373676 containerd[1583]: time="2025-11-24T00:28:21.373649886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:21.374461 containerd[1583]: time="2025-11-24T00:28:21.374418938Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.322962006s" Nov 24 00:28:21.374493 containerd[1583]: time="2025-11-24T00:28:21.374461538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:28:21.374930 containerd[1583]: time="2025-11-24T00:28:21.374897616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:28:22.772264 containerd[1583]: time="2025-11-24T00:28:22.772198584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:22.773067 containerd[1583]: time="2025-11-24T00:28:22.773041194Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 00:28:22.774402 containerd[1583]: time="2025-11-24T00:28:22.774363695Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:22.776770 containerd[1583]: time="2025-11-24T00:28:22.776717729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:22.777556 containerd[1583]: time="2025-11-24T00:28:22.777497211Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.402572484s" Nov 24 00:28:22.777556 containerd[1583]: time="2025-11-24T00:28:22.777539510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:28:22.778262 containerd[1583]: time="2025-11-24T00:28:22.778109840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:28:23.510888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:28:23.512293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 24 00:28:23.715991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:23.729487 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:28:23.769075 kubelet[2106]: E1124 00:28:23.768918 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:28:23.776358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:28:23.776557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:28:23.776979 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.1M memory peak. Nov 24 00:28:23.824584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500784999.mount: Deactivated successfully. Nov 24 00:28:24.852131 containerd[1583]: time="2025-11-24T00:28:24.852045820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:24.852863 containerd[1583]: time="2025-11-24T00:28:24.852829129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 00:28:24.854168 containerd[1583]: time="2025-11-24T00:28:24.854133996Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:24.856084 containerd[1583]: time="2025-11-24T00:28:24.856052254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:24.856550 containerd[1583]: time="2025-11-24T00:28:24.856506976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 2.078371498s" Nov 24 00:28:24.856577 containerd[1583]: time="2025-11-24T00:28:24.856548695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:28:24.857147 containerd[1583]: time="2025-11-24T00:28:24.857099117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:28:25.410537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473011093.mount: Deactivated successfully. 
Nov 24 00:28:26.600192 containerd[1583]: time="2025-11-24T00:28:26.600123510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:26.600780 containerd[1583]: time="2025-11-24T00:28:26.600754774Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 00:28:26.602046 containerd[1583]: time="2025-11-24T00:28:26.601987646Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:26.604530 containerd[1583]: time="2025-11-24T00:28:26.604504536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:26.605400 containerd[1583]: time="2025-11-24T00:28:26.605356003Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.748217191s" Nov 24 00:28:26.605400 containerd[1583]: time="2025-11-24T00:28:26.605385598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:28:26.606091 containerd[1583]: time="2025-11-24T00:28:26.606057729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:28:27.086802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052345293.mount: Deactivated successfully. 
Nov 24 00:28:27.143865 containerd[1583]: time="2025-11-24T00:28:27.143791051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:28:27.144622 containerd[1583]: time="2025-11-24T00:28:27.144540977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:28:27.145607 containerd[1583]: time="2025-11-24T00:28:27.145574124Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:28:27.147419 containerd[1583]: time="2025-11-24T00:28:27.147395801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:28:27.147984 containerd[1583]: time="2025-11-24T00:28:27.147959869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.878395ms" Nov 24 00:28:27.148034 containerd[1583]: time="2025-11-24T00:28:27.147985426Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:28:27.148477 containerd[1583]: time="2025-11-24T00:28:27.148346253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:28:27.888553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606964781.mount: Deactivated successfully. 
Nov 24 00:28:30.676813 containerd[1583]: time="2025-11-24T00:28:30.676733469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:30.698491 containerd[1583]: time="2025-11-24T00:28:30.698448262Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 00:28:30.744477 containerd[1583]: time="2025-11-24T00:28:30.744421638Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:30.804736 containerd[1583]: time="2025-11-24T00:28:30.804687608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:28:30.805900 containerd[1583]: time="2025-11-24T00:28:30.805853645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.657481383s" Nov 24 00:28:30.805900 containerd[1583]: time="2025-11-24T00:28:30.805894432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:28:33.649357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:33.649561 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.1M memory peak. Nov 24 00:28:33.651648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:28:33.673639 systemd[1]: Reload requested from client PID 2263 ('systemctl') (unit session-7.scope)... Nov 24 00:28:33.673653 systemd[1]: Reloading... Nov 24 00:28:33.750048 zram_generator::config[2308]: No configuration found. Nov 24 00:28:34.175069 systemd[1]: Reloading finished in 501 ms. Nov 24 00:28:34.246705 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:28:34.246800 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:28:34.247112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:34.247172 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.4M memory peak. Nov 24 00:28:34.248649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:28:34.418704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:34.435325 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:28:34.469263 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:28:34.469263 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:28:34.469263 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:28:34.469776 kubelet[2353]: I1124 00:28:34.469281 2353 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:28:35.121917 kubelet[2353]: I1124 00:28:35.121439 2353 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:28:35.121917 kubelet[2353]: I1124 00:28:35.121477 2353 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:28:35.122076 kubelet[2353]: I1124 00:28:35.121958 2353 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:28:35.146706 kubelet[2353]: E1124 00:28:35.146656 2353 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:28:35.146836 kubelet[2353]: I1124 00:28:35.146732 2353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:28:35.154648 kubelet[2353]: I1124 00:28:35.154608 2353 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:28:35.159887 kubelet[2353]: I1124 00:28:35.159866 2353 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:28:35.160135 kubelet[2353]: I1124 00:28:35.160097 2353 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:28:35.160274 kubelet[2353]: I1124 00:28:35.160126 2353 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:28:35.160274 kubelet[2353]: I1124 00:28:35.160271 2353 topology_manager.go:138] "Creating 
topology manager with none policy" Nov 24 00:28:35.160401 kubelet[2353]: I1124 00:28:35.160280 2353 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:28:35.160447 kubelet[2353]: I1124 00:28:35.160421 2353 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:28:35.162403 kubelet[2353]: I1124 00:28:35.162381 2353 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:28:35.162403 kubelet[2353]: I1124 00:28:35.162400 2353 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:28:35.162467 kubelet[2353]: I1124 00:28:35.162425 2353 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:28:35.163939 kubelet[2353]: I1124 00:28:35.163826 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:28:35.169530 kubelet[2353]: E1124 00:28:35.169475 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:28:35.169657 kubelet[2353]: I1124 00:28:35.169590 2353 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:28:35.170261 kubelet[2353]: I1124 00:28:35.170194 2353 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:28:35.170652 kubelet[2353]: E1124 00:28:35.170620 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:28:35.171483 kubelet[2353]: W1124 00:28:35.171430 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 24 00:28:35.174478 kubelet[2353]: I1124 00:28:35.174452 2353 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:28:35.174558 kubelet[2353]: I1124 00:28:35.174544 2353 server.go:1289] "Started kubelet" Nov 24 00:28:35.176038 kubelet[2353]: I1124 00:28:35.175980 2353 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:28:35.178175 kubelet[2353]: I1124 00:28:35.177191 2353 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:28:35.178175 kubelet[2353]: I1124 00:28:35.177520 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:28:35.178175 kubelet[2353]: I1124 00:28:35.178105 2353 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:28:35.179696 kubelet[2353]: I1124 00:28:35.179675 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:28:35.180407 kubelet[2353]: I1124 00:28:35.180242 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:28:35.180710 kubelet[2353]: E1124 00:28:35.180685 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:35.181124 kubelet[2353]: E1124 00:28:35.181090 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Nov 24 00:28:35.182164 kubelet[2353]: E1124 00:28:35.181149 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac9d3b49dc0ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 00:28:35.174490348 +0000 UTC m=+0.735284121,LastTimestamp:2025-11-24 00:28:35.174490348 +0000 UTC m=+0.735284121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 00:28:35.182452 kubelet[2353]: E1124 00:28:35.182418 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:28:35.182452 kubelet[2353]: I1124 00:28:35.182436 2353 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:28:35.182538 kubelet[2353]: I1124 00:28:35.182514 2353 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:28:35.183032 kubelet[2353]: I1124 00:28:35.180175 2353 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:28:35.183151 kubelet[2353]: I1124 00:28:35.183136 2353 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Nov 24 00:28:35.183202 kubelet[2353]: I1124 00:28:35.183188 2353 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:28:35.183601 kubelet[2353]: E1124 00:28:35.183584 2353 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:28:35.184163 kubelet[2353]: I1124 00:28:35.184149 2353 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:28:35.186955 kubelet[2353]: I1124 00:28:35.186914 2353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:28:35.198458 kubelet[2353]: I1124 00:28:35.198433 2353 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:28:35.198458 kubelet[2353]: I1124 00:28:35.198449 2353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:28:35.198458 kubelet[2353]: I1124 00:28:35.198465 2353 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:28:35.281318 kubelet[2353]: E1124 00:28:35.281275 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:35.381860 kubelet[2353]: E1124 00:28:35.381748 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:35.382139 kubelet[2353]: E1124 00:28:35.382111 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Nov 24 00:28:35.482431 kubelet[2353]: E1124 00:28:35.482382 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:35.581864 kubelet[2353]: I1124 00:28:35.581600 2353 policy_none.go:49] "None policy: Start" Nov 24 00:28:35.581864 kubelet[2353]: I1124 00:28:35.581645 2353 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:28:35.581864 kubelet[2353]: I1124 00:28:35.581661 2353 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:28:35.582641 kubelet[2353]: E1124 00:28:35.582605 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:35.582786 kubelet[2353]: I1124 00:28:35.582763 2353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:28:35.582834 kubelet[2353]: I1124 00:28:35.582794 2353 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:28:35.582834 kubelet[2353]: I1124 00:28:35.582819 2353 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:28:35.582834 kubelet[2353]: I1124 00:28:35.582827 2353 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:28:35.582932 kubelet[2353]: E1124 00:28:35.582870 2353 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:28:35.583658 kubelet[2353]: E1124 00:28:35.583550 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:28:35.593499 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:28:35.609133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:28:35.612277 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:28:35.623324 kubelet[2353]: E1124 00:28:35.622043 2353 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:28:35.623324 kubelet[2353]: I1124 00:28:35.622285 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:28:35.623324 kubelet[2353]: I1124 00:28:35.622297 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:28:35.623324 kubelet[2353]: I1124 00:28:35.622537 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:28:35.623324 kubelet[2353]: E1124 00:28:35.623231 2353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:28:35.623324 kubelet[2353]: E1124 00:28:35.623310 2353 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 24 00:28:35.724478 kubelet[2353]: I1124 00:28:35.724383 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:28:35.724785 kubelet[2353]: E1124 00:28:35.724704 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Nov 24 00:28:35.783365 kubelet[2353]: E1124 00:28:35.783318 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Nov 24 00:28:35.785591 kubelet[2353]: I1124 00:28:35.785554 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:35.785591 kubelet[2353]: I1124 00:28:35.785580 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:35.785693 kubelet[2353]: I1124 00:28:35.785597 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:35.926480 kubelet[2353]: I1124 00:28:35.926439 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:28:35.926779 kubelet[2353]: E1124 00:28:35.926744 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Nov 24 00:28:36.009189 kubelet[2353]: E1124 00:28:36.008975 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac9d3b49dc0ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 00:28:35.174490348 +0000 UTC m=+0.735284121,LastTimestamp:2025-11-24 00:28:35.174490348 +0000 UTC m=+0.735284121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 00:28:36.086883 kubelet[2353]: I1124 00:28:36.086842 2353 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:36.086883 kubelet[2353]: I1124 00:28:36.086876 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:36.087049 kubelet[2353]: I1124 00:28:36.086899 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:36.087049 kubelet[2353]: I1124 00:28:36.086921 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:36.087049 kubelet[2353]: I1124 00:28:36.086959 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:36.087106 systemd[1]: Created slice kubepods-burstable-podae893902b2bb183b4394fe9fc543e5ba.slice - libcontainer container kubepods-burstable-podae893902b2bb183b4394fe9fc543e5ba.slice. Nov 24 00:28:36.107785 kubelet[2353]: E1124 00:28:36.107753 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:36.108065 kubelet[2353]: E1124 00:28:36.108047 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.108545 containerd[1583]: time="2025-11-24T00:28:36.108508215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae893902b2bb183b4394fe9fc543e5ba,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:36.151618 systemd[1]: Created slice kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice - libcontainer container kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice. 
Nov 24 00:28:36.153393 kubelet[2353]: E1124 00:28:36.153356 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:36.165338 containerd[1583]: time="2025-11-24T00:28:36.165294500Z" level=info msg="connecting to shim 55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf" address="unix:///run/containerd/s/8829cda80526dddc70b9bbfbd811216688df975887cbbf6b9cbb8d126685e5c4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:36.171267 systemd[1]: Created slice kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice - libcontainer container kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice. Nov 24 00:28:36.176489 kubelet[2353]: E1124 00:28:36.176255 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:36.188175 kubelet[2353]: I1124 00:28:36.188121 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:36.194195 systemd[1]: Started cri-containerd-55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf.scope - libcontainer container 55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf. Nov 24 00:28:36.266040 containerd[1583]: time="2025-11-24T00:28:36.265903528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae893902b2bb183b4394fe9fc543e5ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf\"" Nov 24 00:28:36.267064 kubelet[2353]: E1124 00:28:36.267036 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.288367 containerd[1583]: time="2025-11-24T00:28:36.288331919Z" level=info msg="CreateContainer within sandbox \"55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:28:36.301827 containerd[1583]: time="2025-11-24T00:28:36.301782694Z" level=info msg="Container 4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:28:36.309826 containerd[1583]: time="2025-11-24T00:28:36.309791393Z" level=info msg="CreateContainer within sandbox \"55c09fa43a737751ae29c60ab224506a30d819f1377be948dc59816ef1365ebf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b\"" Nov 24 00:28:36.310447 containerd[1583]: time="2025-11-24T00:28:36.310425693Z" level=info msg="StartContainer for \"4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b\"" Nov 24 00:28:36.312623 containerd[1583]: time="2025-11-24T00:28:36.312112065Z" level=info msg="connecting to shim 4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b" address="unix:///run/containerd/s/8829cda80526dddc70b9bbfbd811216688df975887cbbf6b9cbb8d126685e5c4" protocol=ttrpc version=3 Nov 24 00:28:36.328838 kubelet[2353]: I1124 00:28:36.328815 2353 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Nov 24 00:28:36.329293 kubelet[2353]: E1124 00:28:36.329261 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Nov 24 00:28:36.341165 systemd[1]: Started cri-containerd-4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b.scope - libcontainer container 4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b. Nov 24 00:28:36.387331 containerd[1583]: time="2025-11-24T00:28:36.387289790Z" level=info msg="StartContainer for \"4b4f1daf13f329f82b37f43a9a588742ef8c98253187aa50eff4c0624deed21b\" returns successfully" Nov 24 00:28:36.415620 kubelet[2353]: E1124 00:28:36.415564 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:28:36.431418 kubelet[2353]: E1124 00:28:36.431363 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:28:36.453761 kubelet[2353]: E1124 00:28:36.453719 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.454233 containerd[1583]: time="2025-11-24T00:28:36.454179893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:36.476186 containerd[1583]: time="2025-11-24T00:28:36.476135377Z" level=info msg="connecting to shim 2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909" address="unix:///run/containerd/s/b069546086078f7f5fa215eea903d162ecca3faeb0cb183ce0ec241921a87c9e" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:36.477101 kubelet[2353]: E1124 00:28:36.477067 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.477510 containerd[1583]: time="2025-11-24T00:28:36.477488525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:36.497968 containerd[1583]: time="2025-11-24T00:28:36.497926203Z" level=info msg="connecting to shim 4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414" address="unix:///run/containerd/s/2c7c9536b70a62bdfec8aa5f2e359579e06f4ba075fb87ce02dc238fc25138c1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:36.502343 systemd[1]: Started cri-containerd-2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909.scope - libcontainer container 2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909. Nov 24 00:28:36.526150 systemd[1]: Started cri-containerd-4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414.scope - libcontainer container 4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414. 
Nov 24 00:28:36.553958 containerd[1583]: time="2025-11-24T00:28:36.553879255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909\"" Nov 24 00:28:36.554694 kubelet[2353]: E1124 00:28:36.554668 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.559030 containerd[1583]: time="2025-11-24T00:28:36.558984659Z" level=info msg="CreateContainer within sandbox \"2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:28:36.568482 containerd[1583]: time="2025-11-24T00:28:36.568439611Z" level=info msg="Container 283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:28:36.575782 containerd[1583]: time="2025-11-24T00:28:36.575731085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414\"" Nov 24 00:28:36.576479 containerd[1583]: time="2025-11-24T00:28:36.576452257Z" level=info msg="CreateContainer within sandbox \"2c7dfdec62b758b359b821e5fbd99075b88fcc36774b316b9dd300e5c3d58909\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad\"" Nov 24 00:28:36.577191 kubelet[2353]: E1124 00:28:36.577127 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.578106 containerd[1583]: time="2025-11-24T00:28:36.577944235Z" level=info msg="StartContainer for \"283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad\"" Nov 24 00:28:36.579982 containerd[1583]: time="2025-11-24T00:28:36.579950418Z" level=info msg="connecting to shim 283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad" address="unix:///run/containerd/s/b069546086078f7f5fa215eea903d162ecca3faeb0cb183ce0ec241921a87c9e" protocol=ttrpc version=3 Nov 24 00:28:36.582021 containerd[1583]: time="2025-11-24T00:28:36.581984122Z" level=info msg="CreateContainer within sandbox \"4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:28:36.592273 containerd[1583]: time="2025-11-24T00:28:36.592233954Z" level=info msg="Container 8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:28:36.592365 kubelet[2353]: E1124 00:28:36.592223 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:36.592527 kubelet[2353]: E1124 00:28:36.592509 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:36.600049 containerd[1583]: time="2025-11-24T00:28:36.599997433Z" level=info msg="CreateContainer within sandbox 
\"4f72fa5080931289e3fdbde92fe21928091f6a7bc7d545523adf87b5d39ce414\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165\"" Nov 24 00:28:36.600892 containerd[1583]: time="2025-11-24T00:28:36.600866262Z" level=info msg="StartContainer for \"8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165\"" Nov 24 00:28:36.602053 containerd[1583]: time="2025-11-24T00:28:36.602002663Z" level=info msg="connecting to shim 8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165" address="unix:///run/containerd/s/2c7c9536b70a62bdfec8aa5f2e359579e06f4ba075fb87ce02dc238fc25138c1" protocol=ttrpc version=3 Nov 24 00:28:36.606393 systemd[1]: Started cri-containerd-283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad.scope - libcontainer container 283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad. Nov 24 00:28:36.639118 systemd[1]: Started cri-containerd-8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165.scope - libcontainer container 8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165. Nov 24 00:28:36.664484 containerd[1583]: time="2025-11-24T00:28:36.664444905Z" level=info msg="StartContainer for \"283d702625b18cd343fd4bd7410ef4ee323f2fde1c5ea43790922708100f12ad\" returns successfully" Nov 24 00:28:36.697528 containerd[1583]: time="2025-11-24T00:28:36.697491439Z" level=info msg="StartContainer for \"8acbef66fb9a62752527d33af2879b4da31a6562815ab0327747ec4298177165\" returns successfully" Nov 24 00:28:37.131316 kubelet[2353]: I1124 00:28:37.131261 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:28:37.596776 kubelet[2353]: E1124 00:28:37.596656 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:37.597112 kubelet[2353]: E1124 00:28:37.596823 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:37.599373 kubelet[2353]: E1124 00:28:37.599343 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:37.599485 kubelet[2353]: E1124 00:28:37.599461 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:37.599751 kubelet[2353]: E1124 00:28:37.599726 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:37.599850 kubelet[2353]: E1124 00:28:37.599826 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:37.830208 kubelet[2353]: E1124 00:28:37.830165 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 24 00:28:37.926798 kubelet[2353]: I1124 00:28:37.926161 2353 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 00:28:37.926798 kubelet[2353]: E1124 00:28:37.926193 2353 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Nov 24 00:28:37.938510 kubelet[2353]: E1124 00:28:37.938474 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.038864 kubelet[2353]: E1124 00:28:38.038804 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.139416 kubelet[2353]: E1124 00:28:38.139348 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.240173 kubelet[2353]: E1124 00:28:38.240135 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.340643 kubelet[2353]: E1124 00:28:38.340607 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.441129 kubelet[2353]: E1124 00:28:38.441097 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.541830 kubelet[2353]: E1124 00:28:38.541767 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.599885 kubelet[2353]: E1124 00:28:38.599864 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:38.600224 kubelet[2353]: E1124 00:28:38.599896 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:28:38.600224 kubelet[2353]: E1124 00:28:38.599964 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:38.600224 kubelet[2353]: E1124 00:28:38.599977 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:38.642210 kubelet[2353]: E1124 00:28:38.642175 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.743287 kubelet[2353]: E1124 00:28:38.743229 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.844080 kubelet[2353]: E1124 00:28:38.843945 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:28:38.883433 kubelet[2353]: I1124 00:28:38.883373 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:38.890610 kubelet[2353]: I1124 00:28:38.890577 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:38.895132 kubelet[2353]: I1124 00:28:38.895096 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:39.171936 kubelet[2353]: I1124 00:28:39.171493 2353 apiserver.go:52] "Watching apiserver" Nov 24 00:28:39.175823 kubelet[2353]: E1124 00:28:39.175784 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:39.184185 
kubelet[2353]: I1124 00:28:39.184156 2353 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:28:39.601150 kubelet[2353]: E1124 00:28:39.601107 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:39.601150 kubelet[2353]: I1124 00:28:39.601132 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:39.606470 kubelet[2353]: E1124 00:28:39.606447 2353 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:39.606604 kubelet[2353]: E1124 00:28:39.606573 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:40.396949 systemd[1]: Reload requested from client PID 2636 ('systemctl') (unit session-7.scope)... Nov 24 00:28:40.396962 systemd[1]: Reloading... Nov 24 00:28:40.470074 zram_generator::config[2682]: No configuration found. Nov 24 00:28:40.601963 kubelet[2353]: E1124 00:28:40.601935 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:40.734201 systemd[1]: Reloading finished in 336 ms. Nov 24 00:28:40.760398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:28:40.776505 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:28:40.776773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:40.776813 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 132.6M memory peak. Nov 24 00:28:40.779276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:28:40.981598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:28:40.993416 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:28:41.030890 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:28:41.030890 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:28:41.030890 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:28:41.031311 kubelet[2724]: I1124 00:28:41.030926 2724 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:28:41.038694 kubelet[2724]: I1124 00:28:41.038644 2724 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:28:41.038694 kubelet[2724]: I1124 00:28:41.038674 2724 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:28:41.039454 kubelet[2724]: I1124 00:28:41.039421 2724 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:28:41.040537 kubelet[2724]: I1124 00:28:41.040514 2724 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:28:41.042368 kubelet[2724]: I1124 00:28:41.042341 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:28:41.045428 kubelet[2724]: I1124 00:28:41.045391 2724 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:28:41.049534 kubelet[2724]: I1124 00:28:41.049519 2724 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:28:41.049739 kubelet[2724]: I1124 00:28:41.049700 2724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:28:41.049900 kubelet[2724]: I1124 00:28:41.049738 2724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:28:41.049974 kubelet[2724]: I1124 00:28:41.049902 2724 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:28:41.049974 kubelet[2724]: I1124 00:28:41.049910 2724 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:28:41.049974 kubelet[2724]: I1124 00:28:41.049949 2724 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:28:41.050118 kubelet[2724]: I1124 
00:28:41.050102 2724 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:28:41.050154 kubelet[2724]: I1124 00:28:41.050127 2724 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:28:41.050154 kubelet[2724]: I1124 00:28:41.050150 2724 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:28:41.050211 kubelet[2724]: I1124 00:28:41.050158 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:28:41.051440 kubelet[2724]: I1124 00:28:41.051397 2724 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:28:41.053026 kubelet[2724]: I1124 00:28:41.052986 2724 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:28:41.056544 kubelet[2724]: I1124 00:28:41.055877 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:28:41.056544 kubelet[2724]: I1124 00:28:41.055921 2724 server.go:1289] "Started kubelet" Nov 24 00:28:41.057778 kubelet[2724]: I1124 00:28:41.057746 2724 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:28:41.059669 kubelet[2724]: I1124 00:28:41.057712 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:28:41.060185 kubelet[2724]: I1124 00:28:41.060102 2724 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:28:41.060294 kubelet[2724]: I1124 00:28:41.060272 2724 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:28:41.061749 kubelet[2724]: I1124 00:28:41.061733 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:28:41.063352 kubelet[2724]: I1124 00:28:41.063324 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:28:41.066360 kubelet[2724]: I1124 00:28:41.066342 2724 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:28:41.066555 kubelet[2724]: I1124 00:28:41.066540 2724 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:28:41.066736 kubelet[2724]: I1124 00:28:41.066722 2724 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:28:41.067533 kubelet[2724]: I1124 00:28:41.067505 2724 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:28:41.069881 kubelet[2724]: I1124 00:28:41.069221 2724 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:28:41.069881 kubelet[2724]: I1124 00:28:41.069256 2724 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:28:41.069979 kubelet[2724]: E1124 00:28:41.069894 2724 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:28:41.070146 kubelet[2724]: I1124 00:28:41.070123 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:28:41.077610 kubelet[2724]: I1124 00:28:41.077577 2724 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 24 00:28:41.077610 kubelet[2724]: I1124 00:28:41.077600 2724 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:28:41.077738 kubelet[2724]: I1124 00:28:41.077617 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:28:41.077738 kubelet[2724]: I1124 00:28:41.077627 2724 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:28:41.077738 kubelet[2724]: E1124 00:28:41.077666 2724 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:28:41.107514 kubelet[2724]: I1124 00:28:41.107483 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:28:41.107514 kubelet[2724]: I1124 00:28:41.107501 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:28:41.107514 kubelet[2724]: I1124 00:28:41.107518 2724 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:28:41.107681 kubelet[2724]: I1124 00:28:41.107628 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:28:41.107681 kubelet[2724]: I1124 00:28:41.107637 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:28:41.107681 kubelet[2724]: I1124 00:28:41.107652 2724 policy_none.go:49] "None policy: Start" Nov 24 00:28:41.107681 kubelet[2724]: I1124 00:28:41.107661 2724 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:28:41.107681 kubelet[2724]: I1124 00:28:41.107670 2724 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:28:41.107783 kubelet[2724]: I1124 00:28:41.107745 2724 state_mem.go:75] "Updated machine memory state" Nov 24 00:28:41.111429 kubelet[2724]: E1124 00:28:41.111352 2724 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:28:41.111589 kubelet[2724]: I1124 00:28:41.111555 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:28:41.111621 kubelet[2724]: I1124 00:28:41.111577 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:28:41.111744 kubelet[2724]: I1124 00:28:41.111731 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:28:41.113160 kubelet[2724]: E1124 00:28:41.113136 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:28:41.178764 kubelet[2724]: I1124 00:28:41.178730 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:41.178924 kubelet[2724]: I1124 00:28:41.178898 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:41.178996 kubelet[2724]: I1124 00:28:41.178757 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.183573 kubelet[2724]: E1124 00:28:41.183542 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:41.183994 kubelet[2724]: E1124 00:28:41.183974 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:41.184117 kubelet[2724]: E1124 00:28:41.184092 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.218262 kubelet[2724]: I1124 00:28:41.218229 2724 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:28:41.225135 kubelet[2724]: I1124 00:28:41.225098 2724 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 24 00:28:41.225258 kubelet[2724]: I1124 00:28:41.225189 2724 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 00:28:41.268038 kubelet[2724]: I1124 00:28:41.267899 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:41.268038 kubelet[2724]: I1124 00:28:41.267934 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:41.268038 kubelet[2724]: I1124 00:28:41.267952 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:41.268038 kubelet[2724]: I1124 00:28:41.267965 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae893902b2bb183b4394fe9fc543e5ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae893902b2bb183b4394fe9fc543e5ba\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:41.268038 kubelet[2724]: I1124 00:28:41.268002 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.268242 kubelet[2724]: I1124 00:28:41.268044 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.268242 kubelet[2724]: I1124 00:28:41.268068 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.268242 kubelet[2724]: I1124 00:28:41.268111 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.268242 kubelet[2724]: I1124 00:28:41.268150 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:28:41.367956 sudo[2764]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 24 00:28:41.368289 sudo[2764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 24 00:28:41.484028 kubelet[2724]: E1124 00:28:41.483983 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:41.484392 kubelet[2724]: E1124 00:28:41.484236 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:41.484392 kubelet[2724]: E1124 00:28:41.484310 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:41.661394 sudo[2764]: pam_unix(sudo:session): session closed for user root Nov 24 00:28:42.051547 kubelet[2724]: I1124 00:28:42.051501 2724 apiserver.go:52] "Watching apiserver" Nov 24 00:28:42.067809 kubelet[2724]: I1124 00:28:42.067761 2724 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:28:42.090414 kubelet[2724]: I1124 00:28:42.090383 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:42.091039 kubelet[2724]: I1124 00:28:42.090808 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:42.091039 kubelet[2724]: E1124 00:28:42.090856 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 24 00:28:42.098027 kubelet[2724]: E1124 00:28:42.095868 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 24 00:28:42.098027 kubelet[2724]: E1124 00:28:42.096064 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:42.098027 kubelet[2724]: E1124 00:28:42.096388 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 24 00:28:42.098027 kubelet[2724]: E1124 00:28:42.096495 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:42.116744 kubelet[2724]: I1124 00:28:42.116674 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.116655676 podStartE2EDuration="4.116655676s" podCreationTimestamp="2025-11-24 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:28:42.10768315 +0000 UTC m=+1.109937921" watchObservedRunningTime="2025-11-24 00:28:42.116655676 +0000 UTC m=+1.118910447" Nov 24 00:28:42.127404 kubelet[2724]: I1124 00:28:42.127341 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.127323703 podStartE2EDuration="4.127323703s" podCreationTimestamp="2025-11-24 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:28:42.127225008 +0000 UTC m=+1.129479799" watchObservedRunningTime="2025-11-24 00:28:42.127323703 +0000 UTC m=+1.129578474" Nov 24 00:28:42.127606 kubelet[2724]: I1124 00:28:42.127433 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.127429681 podStartE2EDuration="4.127429681s" podCreationTimestamp="2025-11-24 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:28:42.117534344 +0000 UTC m=+1.119789115" watchObservedRunningTime="2025-11-24 00:28:42.127429681 +0000 UTC m=+1.129684452" Nov 24 00:28:42.964174 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 24 00:28:42.966059 sshd[1788]: Connection closed by 10.0.0.1 port 53882 Nov 24 00:28:42.966077 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 24 00:28:42.969659 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:53882.service: Deactivated successfully. Nov 24 00:28:42.971647 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:28:42.971865 systemd[1]: session-7.scope: Consumed 4.373s CPU time, 259.8M memory peak. Nov 24 00:28:42.972971 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:28:42.974136 systemd-logind[1553]: Removed session 7. 
Nov 24 00:28:43.092294 kubelet[2724]: E1124 00:28:43.092244 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:43.092710 kubelet[2724]: E1124 00:28:43.092308 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:44.094048 kubelet[2724]: E1124 00:28:44.093803 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:46.131530 kubelet[2724]: E1124 00:28:46.131446 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:46.751505 kubelet[2724]: I1124 00:28:46.751359 2724 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:28:46.751687 containerd[1583]: time="2025-11-24T00:28:46.751651457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:28:46.752081 kubelet[2724]: I1124 00:28:46.751801 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:28:47.097233 kubelet[2724]: E1124 00:28:47.097119 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:47.764994 systemd[1]: Created slice kubepods-besteffort-pod492fc8d0_265b_473c_9b29_edca01dbe652.slice - libcontainer container kubepods-besteffort-pod492fc8d0_265b_473c_9b29_edca01dbe652.slice. Nov 24 00:28:47.781498 systemd[1]: Created slice kubepods-burstable-pod205d858a_752d_4a19_9c52_ab5937f304bb.slice - libcontainer container kubepods-burstable-pod205d858a_752d_4a19_9c52_ab5937f304bb.slice. 
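Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" errors indicate the node's resolv.conf lists more nameservers than the classic resolver limit of three, so only the first three are applied (1.1.1.1, 1.0.0.1, 8.8.8.8 here). Below is a minimal sketch of that truncation rule over resolv.conf-style input; it illustrates the behaviour the log describes and is not the kubelet's own implementation, and the fourth nameserver in the sample is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the three-nameserver resolver limit that the
// kubelet warning above is enforcing.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers "nameserver" entries
// from resolv.conf-style content and reports whether any were dropped.
func applyNameserverLimit(resolvConf string) (kept []string, truncated bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				truncated = true
			}
		}
	}
	return kept, truncated
}

func main() {
	// Hypothetical resolv.conf with four nameservers; the first three match
	// the "applied nameserver line" reported in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, truncated := applyNameserverLimit(conf)
	fmt.Println(strings.Join(kept, " "), "truncated:", truncated)
	// Output: 1.1.1.1 1.0.0.1 8.8.8.8 truncated: true
}
```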
Nov 24 00:28:47.813177 kubelet[2724]: I1124 00:28:47.813131 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-kernel\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813177 kubelet[2724]: I1124 00:28:47.813185 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-hubble-tls\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813281 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-hostproc\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813319 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-cgroup\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813359 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-net\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813402 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-run\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813436 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-xtables-lock\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813634 kubelet[2724]: I1124 00:28:47.813475 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/492fc8d0-265b-473c-9b29-edca01dbe652-kube-proxy\") pod \"kube-proxy-rhkxx\" (UID: \"492fc8d0-265b-473c-9b29-edca01dbe652\") " pod="kube-system/kube-proxy-rhkxx" Nov 24 00:28:47.813776 kubelet[2724]: I1124 00:28:47.813518 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/492fc8d0-265b-473c-9b29-edca01dbe652-xtables-lock\") pod \"kube-proxy-rhkxx\" (UID: \"492fc8d0-265b-473c-9b29-edca01dbe652\") " pod="kube-system/kube-proxy-rhkxx" Nov 24 00:28:47.813776 kubelet[2724]: I1124 00:28:47.813554 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/492fc8d0-265b-473c-9b29-edca01dbe652-lib-modules\") pod \"kube-proxy-rhkxx\" (UID: \"492fc8d0-265b-473c-9b29-edca01dbe652\") " pod="kube-system/kube-proxy-rhkxx" Nov 24 00:28:47.813776 kubelet[2724]: I1124 00:28:47.813593 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zvr\" (UniqueName: \"kubernetes.io/projected/492fc8d0-265b-473c-9b29-edca01dbe652-kube-api-access-f8zvr\") pod \"kube-proxy-rhkxx\" (UID: \"492fc8d0-265b-473c-9b29-edca01dbe652\") " pod="kube-system/kube-proxy-rhkxx" Nov 24 00:28:47.813776 kubelet[2724]: I1124 00:28:47.813632 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cni-path\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813776 kubelet[2724]: I1124 00:28:47.813667 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-lib-modules\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813893 kubelet[2724]: I1124 00:28:47.813705 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-config-path\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813893 kubelet[2724]: I1124 00:28:47.813742 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvktb\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-kube-api-access-cvktb\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813893 kubelet[2724]: I1124 00:28:47.813790 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-bpf-maps\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813893 kubelet[2724]: I1124 00:28:47.813830 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-etc-cni-netd\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.813893 kubelet[2724]: I1124 00:28:47.813864 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/205d858a-752d-4a19-9c52-ab5937f304bb-clustermesh-secrets\") pod \"cilium-29mm9\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " pod="kube-system/cilium-29mm9" Nov 24 00:28:47.970917 systemd[1]: Created slice kubepods-besteffort-pod48452b04_e529_483e_9fab_f06eda679727.slice - libcontainer container kubepods-besteffort-pod48452b04_e529_483e_9fab_f06eda679727.slice. 
Nov 24 00:28:47.974697 kubelet[2724]: E1124 00:28:47.974591 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.015589 kubelet[2724]: I1124 00:28:48.015433 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48452b04-e529-483e-9fab-f06eda679727-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rhhjl\" (UID: \"48452b04-e529-483e-9fab-f06eda679727\") " pod="kube-system/cilium-operator-6c4d7847fc-rhhjl" Nov 24 00:28:48.015589 kubelet[2724]: I1124 00:28:48.015469 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57q8b\" (UniqueName: \"kubernetes.io/projected/48452b04-e529-483e-9fab-f06eda679727-kube-api-access-57q8b\") pod \"cilium-operator-6c4d7847fc-rhhjl\" (UID: \"48452b04-e529-483e-9fab-f06eda679727\") " pod="kube-system/cilium-operator-6c4d7847fc-rhhjl" Nov 24 00:28:48.076268 kubelet[2724]: E1124 00:28:48.076224 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.076834 containerd[1583]: time="2025-11-24T00:28:48.076792935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhkxx,Uid:492fc8d0-265b-473c-9b29-edca01dbe652,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:48.086837 kubelet[2724]: E1124 00:28:48.086797 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.087140 containerd[1583]: time="2025-11-24T00:28:48.087105448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29mm9,Uid:205d858a-752d-4a19-9c52-ab5937f304bb,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:48.099751 kubelet[2724]: E1124 00:28:48.098961 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.099751 kubelet[2724]: E1124 00:28:48.098989 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.121295 containerd[1583]: time="2025-11-24T00:28:48.121148252Z" level=info msg="connecting to shim 839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:48.130791 containerd[1583]: time="2025-11-24T00:28:48.129164839Z" level=info msg="connecting to shim 1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951" address="unix:///run/containerd/s/8b602a80c4e1a89fe140a70299699d21c532b9922d1921017b0cb782bee2b20f" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:48.180155 systemd[1]: Started cri-containerd-839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509.scope - libcontainer container 839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509. 
Nov 24 00:28:48.184508 systemd[1]: Started cri-containerd-1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951.scope - libcontainer container 1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951. Nov 24 00:28:48.212270 containerd[1583]: time="2025-11-24T00:28:48.212225180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29mm9,Uid:205d858a-752d-4a19-9c52-ab5937f304bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\"" Nov 24 00:28:48.212990 kubelet[2724]: E1124 00:28:48.212966 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.213979 containerd[1583]: time="2025-11-24T00:28:48.213939305Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 24 00:28:48.214245 containerd[1583]: time="2025-11-24T00:28:48.213964423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhkxx,Uid:492fc8d0-265b-473c-9b29-edca01dbe652,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951\"" Nov 24 00:28:48.214804 kubelet[2724]: E1124 00:28:48.214774 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.219935 containerd[1583]: time="2025-11-24T00:28:48.219884043Z" level=info msg="CreateContainer within sandbox \"1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:28:48.229990 containerd[1583]: time="2025-11-24T00:28:48.229946559Z" level=info msg="Container b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:28:48.238190 containerd[1583]: time="2025-11-24T00:28:48.238155644Z" level=info msg="CreateContainer within sandbox \"1b4f2fbbb2d816f2329277d7fd0134395370d6c07c4acb0c676a5bb79e644951\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc\"" Nov 24 00:28:48.238726 containerd[1583]: time="2025-11-24T00:28:48.238614251Z" level=info msg="StartContainer for \"b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc\"" Nov 24 00:28:48.240311 containerd[1583]: time="2025-11-24T00:28:48.240275614Z" level=info msg="connecting to shim b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc" address="unix:///run/containerd/s/8b602a80c4e1a89fe140a70299699d21c532b9922d1921017b0cb782bee2b20f" protocol=ttrpc version=3 Nov 24 00:28:48.265163 systemd[1]: Started cri-containerd-b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc.scope - libcontainer container b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc. 
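Editor's note: the containerd entries above use a logfmt-style layout (time="..." level=info msg="..."). A small, hedged parser for just the quoted key="value" fields, handy when extracting timings such as the PullImage/Pulled pair later in the log; it only covers the simple quoting seen here, not full logfmt (bare values like level=info and escaped quotes are ignored).

```go
package main

import (
	"fmt"
	"regexp"
)

// fieldRe extracts key="value" pairs from the simple quoting used in the
// containerd lines above; it does not handle escaped quotes or bare values
// such as level=info.
var fieldRe = regexp.MustCompile(`(\w+)="([^"]*)"`)

func parseContainerdLine(line string) map[string]string {
	fields := map[string]string{}
	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
		fields[m[1]] = m[2]
	}
	return fields
}

func main() {
	// Sample line copied from the log above.
	line := `time="2025-11-24T00:28:46.751651457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."`
	f := parseContainerdLine(line)
	fmt.Println(f["time"], "->", f["msg"])
}
```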
Nov 24 00:28:48.275252 kubelet[2724]: E1124 00:28:48.275098 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:48.276704 containerd[1583]: time="2025-11-24T00:28:48.276649844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rhhjl,Uid:48452b04-e529-483e-9fab-f06eda679727,Namespace:kube-system,Attempt:0,}" Nov 24 00:28:48.300714 containerd[1583]: time="2025-11-24T00:28:48.300645341Z" level=info msg="connecting to shim ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012" address="unix:///run/containerd/s/93ef859e799688a9d9054672650e223ac0aeee744b9a23c4e2adde46727adc38" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:28:48.327130 systemd[1]: Started cri-containerd-ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012.scope - libcontainer container ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012. Nov 24 00:28:48.597278 containerd[1583]: time="2025-11-24T00:28:48.596808850Z" level=info msg="StartContainer for \"b63aee11a22f63a5fd4dc2c4bc75b32b34b2d3933df683138bf4a9454ec8becc\" returns successfully" Nov 24 00:28:48.604792 containerd[1583]: time="2025-11-24T00:28:48.604744753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rhhjl,Uid:48452b04-e529-483e-9fab-f06eda679727,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\"" Nov 24 00:28:48.606640 kubelet[2724]: E1124 00:28:48.606613 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:49.104279 kubelet[2724]: E1124 00:28:49.104246 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:49.112652 kubelet[2724]: I1124 00:28:49.112563 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhkxx" podStartSLOduration=2.1125463939999998 podStartE2EDuration="2.112546394s" podCreationTimestamp="2025-11-24 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:28:49.112499325 +0000 UTC m=+8.114754086" watchObservedRunningTime="2025-11-24 00:28:49.112546394 +0000 UTC m=+8.114801165" Nov 24 00:28:50.964940 kubelet[2724]: E1124 00:28:50.964911 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:51.108081 kubelet[2724]: E1124 00:28:51.108030 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:28:56.670216 update_engine[1557]: I20251124 00:28:56.670136 1557 update_attempter.cc:509] Updating boot flags... Nov 24 00:28:58.072456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381397191.mount: Deactivated successfully. 
Nov 24 00:29:02.711213 containerd[1583]: time="2025-11-24T00:29:02.711150205Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:29:02.711847 containerd[1583]: time="2025-11-24T00:29:02.711799492Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 24 00:29:02.712970 containerd[1583]: time="2025-11-24T00:29:02.712920560Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:29:02.714260 containerd[1583]: time="2025-11-24T00:29:02.714221659Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.500237507s" Nov 24 00:29:02.714260 containerd[1583]: time="2025-11-24T00:29:02.714253338Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 24 00:29:02.715180 containerd[1583]: time="2025-11-24T00:29:02.715136558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 24 00:29:02.721270 containerd[1583]: time="2025-11-24T00:29:02.721230042Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 24 00:29:02.729610 containerd[1583]: time="2025-11-24T00:29:02.729577927Z" level=info msg="Container c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:02.739562 containerd[1583]: time="2025-11-24T00:29:02.739526927Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\"" Nov 24 00:29:02.740068 containerd[1583]: time="2025-11-24T00:29:02.740043704Z" level=info msg="StartContainer for \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\"" Nov 24 00:29:02.740834 containerd[1583]: time="2025-11-24T00:29:02.740812817Z" level=info msg="connecting to shim c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" protocol=ttrpc version=3 Nov 24 00:29:02.764135 systemd[1]: Started cri-containerd-c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4.scope - libcontainer container c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4. 
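Editor's note: the "stop pulling image" / "Pulled image" pair above reports 166730503 bytes read for the cilium image over 14.500237507s, which works out to roughly 11.5 MB/s from quay.io in this run. A quick back-of-the-envelope sketch of that division; the byte count and duration are copied from the log, and the throughput figure is derived arithmetic only.

```go
package main

import "fmt"

func main() {
	// Figures copied from the cilium image pull entries above.
	const bytesRead = 166730503.0    // "bytes read" reported by containerd
	const pullSeconds = 14.500237507 // reported pull duration in seconds

	mbPerSec := bytesRead / pullSeconds / 1e6
	fmt.Printf("average pull throughput: %.1f MB/s\n", mbPerSec) // ~11.5 MB/s
}
```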
Nov 24 00:29:02.796095 containerd[1583]: time="2025-11-24T00:29:02.796061939Z" level=info msg="StartContainer for \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" returns successfully" Nov 24 00:29:02.808375 systemd[1]: cri-containerd-c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4.scope: Deactivated successfully. Nov 24 00:29:02.809965 containerd[1583]: time="2025-11-24T00:29:02.809924885Z" level=info msg="received container exit event container_id:\"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" id:\"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" pid:3170 exited_at:{seconds:1763944142 nanos:809518016}" Nov 24 00:29:02.831483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4-rootfs.mount: Deactivated successfully. Nov 24 00:29:03.768410 kubelet[2724]: E1124 00:29:03.768369 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:04.771454 kubelet[2724]: E1124 00:29:04.771406 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:04.776109 containerd[1583]: time="2025-11-24T00:29:04.776060356Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 24 00:29:04.794753 containerd[1583]: time="2025-11-24T00:29:04.794700408Z" level=info msg="Container f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:04.805060 containerd[1583]: time="2025-11-24T00:29:04.804982560Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\"" Nov 24 00:29:04.805627 containerd[1583]: time="2025-11-24T00:29:04.805583805Z" level=info msg="StartContainer for \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\"" Nov 24 00:29:04.807950 containerd[1583]: time="2025-11-24T00:29:04.807902432Z" level=info msg="connecting to shim f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" protocol=ttrpc version=3 Nov 24 00:29:04.831265 systemd[1]: Started cri-containerd-f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be.scope - libcontainer container f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be. Nov 24 00:29:04.932788 containerd[1583]: time="2025-11-24T00:29:04.932746148Z" level=info msg="StartContainer for \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" returns successfully" Nov 24 00:29:04.940354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:29:04.940610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:29:04.940942 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:29:04.942504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
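Editor's note: the container exit events carry an exited_at:{seconds:... nanos:...} pair, which is a plain Unix timestamp; decoding the one above for the mount-cgroup container gives 2025-11-24 00:29:02.809 UTC, matching the surrounding journal time. A one-line Go sketch of that conversion, using the seconds/nanos values copied from the log.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds/nanos copied from the exited_at field of the mount-cgroup
	// container's exit event above.
	exitedAt := time.Unix(1763944142, 809518016).UTC()
	fmt.Println(exitedAt) // 2025-11-24 00:29:02.809518016 +0000 UTC
}
```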
Nov 24 00:29:04.944084 systemd[1]: cri-containerd-f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be.scope: Deactivated successfully. Nov 24 00:29:04.946415 containerd[1583]: time="2025-11-24T00:29:04.946358928Z" level=info msg="received container exit event container_id:\"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" id:\"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" pid:3220 exited_at:{seconds:1763944144 nanos:945862772}" Nov 24 00:29:04.965844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:29:05.775205 kubelet[2724]: E1124 00:29:05.775173 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:05.792146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626665432.mount: Deactivated successfully. Nov 24 00:29:05.792292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be-rootfs.mount: Deactivated successfully. Nov 24 00:29:05.960398 containerd[1583]: time="2025-11-24T00:29:05.960344026Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 24 00:29:06.434808 containerd[1583]: time="2025-11-24T00:29:06.434737037Z" level=info msg="Container 27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:06.437261 containerd[1583]: time="2025-11-24T00:29:06.437216294Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:29:06.438642 containerd[1583]: time="2025-11-24T00:29:06.438595856Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 24 00:29:06.439851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752203733.mount: Deactivated successfully. 
Nov 24 00:29:06.443629 containerd[1583]: time="2025-11-24T00:29:06.443576812Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:29:06.444824 containerd[1583]: time="2025-11-24T00:29:06.444799028Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.729631543s" Nov 24 00:29:06.444869 containerd[1583]: time="2025-11-24T00:29:06.444826831Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 24 00:29:06.445906 containerd[1583]: time="2025-11-24T00:29:06.445876681Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\"" Nov 24 00:29:06.446316 containerd[1583]: time="2025-11-24T00:29:06.446294178Z" level=info msg="StartContainer for \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\"" Nov 24 00:29:06.447670 containerd[1583]: time="2025-11-24T00:29:06.447644536Z" level=info msg="connecting to shim 27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" protocol=ttrpc version=3 Nov 24 00:29:06.452597 containerd[1583]: time="2025-11-24T00:29:06.452529741Z" level=info msg="CreateContainer within sandbox \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 24 00:29:06.467570 containerd[1583]: time="2025-11-24T00:29:06.467530989Z" level=info msg="Container 38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:06.469183 systemd[1]: Started cri-containerd-27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1.scope - libcontainer container 27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1. 
Nov 24 00:29:06.476941 containerd[1583]: time="2025-11-24T00:29:06.476902729Z" level=info msg="CreateContainer within sandbox \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\"" Nov 24 00:29:06.477727 containerd[1583]: time="2025-11-24T00:29:06.477581289Z" level=info msg="StartContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\"" Nov 24 00:29:06.478678 containerd[1583]: time="2025-11-24T00:29:06.478574713Z" level=info msg="connecting to shim 38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda" address="unix:///run/containerd/s/93ef859e799688a9d9054672650e223ac0aeee744b9a23c4e2adde46727adc38" protocol=ttrpc version=3 Nov 24 00:29:06.500304 systemd[1]: Started cri-containerd-38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda.scope - libcontainer container 38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda. Nov 24 00:29:06.548383 containerd[1583]: time="2025-11-24T00:29:06.548344006Z" level=info msg="StartContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" returns successfully" Nov 24 00:29:06.555045 systemd[1]: cri-containerd-27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1.scope: Deactivated successfully. Nov 24 00:29:06.747966 containerd[1583]: time="2025-11-24T00:29:06.747906362Z" level=info msg="received container exit event container_id:\"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" id:\"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" pid:3279 exited_at:{seconds:1763944146 nanos:556366780}" Nov 24 00:29:06.750080 containerd[1583]: time="2025-11-24T00:29:06.750046629Z" level=info msg="StartContainer for \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" returns successfully" Nov 24 00:29:06.778634 kubelet[2724]: E1124 00:29:06.778566 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:06.781661 kubelet[2724]: E1124 00:29:06.781629 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:06.792894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1-rootfs.mount: Deactivated successfully. 
Nov 24 00:29:07.785174 kubelet[2724]: E1124 00:29:07.784129 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:07.786448 kubelet[2724]: E1124 00:29:07.786368 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:08.303623 kubelet[2724]: I1124 00:29:08.303546 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rhhjl" podStartSLOduration=3.464367862 podStartE2EDuration="21.303529324s" podCreationTimestamp="2025-11-24 00:28:47 +0000 UTC" firstStartedPulling="2025-11-24 00:28:48.607201587 +0000 UTC m=+7.609456358" lastFinishedPulling="2025-11-24 00:29:06.446363048 +0000 UTC m=+25.448617820" observedRunningTime="2025-11-24 00:29:07.497148905 +0000 UTC m=+26.499403676" watchObservedRunningTime="2025-11-24 00:29:08.303529324 +0000 UTC m=+27.305784095" Nov 24 00:29:08.788343 kubelet[2724]: E1124 00:29:08.788313 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:08.879581 containerd[1583]: time="2025-11-24T00:29:08.879530144Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 24 00:29:08.966761 containerd[1583]: time="2025-11-24T00:29:08.966078083Z" level=info msg="Container 5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:08.975192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727911317.mount: Deactivated successfully. Nov 24 00:29:08.984676 containerd[1583]: time="2025-11-24T00:29:08.984623513Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\"" Nov 24 00:29:08.985722 containerd[1583]: time="2025-11-24T00:29:08.985512248Z" level=info msg="StartContainer for \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\"" Nov 24 00:29:08.986958 containerd[1583]: time="2025-11-24T00:29:08.986903952Z" level=info msg="connecting to shim 5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" protocol=ttrpc version=3 Nov 24 00:29:09.003244 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:52954.service - OpenSSH per-connection server daemon (10.0.0.1:52954). Nov 24 00:29:09.015462 systemd[1]: Started cri-containerd-5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a.scope - libcontainer container 5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a. Nov 24 00:29:09.099534 systemd[1]: cri-containerd-5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a.scope: Deactivated successfully. 
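Editor's note: unlike the static pods earlier, the cilium-operator entry above reports a podStartSLOduration (3.464367862s) far shorter than its E2E duration (21.303529324s); the gap equals the image pull window (firstStartedPulling to lastFinishedPulling, about 17.839s), so the SLO figure appears to exclude pull time. A short sketch checking that relationship from the timestamps in the entry; the interpretation is inferred from these numbers, not from kubelet documentation.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds in the input are still accepted

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps and the E2E figure are copied from the cilium-operator
	// pod_startup_latency_tracker entry above.
	firstPull := parse("2025-11-24 00:28:48.607201587 +0000 UTC")
	lastPull := parse("2025-11-24 00:29:06.446363048 +0000 UTC")
	e2e := 21303529324 * time.Nanosecond // podStartE2EDuration = 21.303529324s

	pull := lastPull.Sub(firstPull)
	fmt.Println("image pull window:", pull)    // 17.839161461s
	fmt.Println("E2E minus pull:   ", e2e-pull) // 3.464367863s, within 1ns of podStartSLOduration
}
```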
Nov 24 00:29:09.101536 containerd[1583]: time="2025-11-24T00:29:09.100475010Z" level=info msg="received container exit event container_id:\"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" id:\"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" pid:3354 exited_at:{seconds:1763944149 nanos:100206665}" Nov 24 00:29:09.105122 containerd[1583]: time="2025-11-24T00:29:09.105086290Z" level=info msg="StartContainer for \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" returns successfully" Nov 24 00:29:09.108425 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 52954 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:09.110945 sshd-session[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:09.118451 systemd-logind[1553]: New session 8 of user core. Nov 24 00:29:09.124354 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:29:09.137509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a-rootfs.mount: Deactivated successfully. Nov 24 00:29:09.296206 sshd[3379]: Connection closed by 10.0.0.1 port 52954 Nov 24 00:29:09.297185 sshd-session[3350]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:09.302221 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:52954.service: Deactivated successfully. Nov 24 00:29:09.304722 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:29:09.307184 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:29:09.308552 systemd-logind[1553]: Removed session 8. Nov 24 00:29:09.794268 kubelet[2724]: E1124 00:29:09.794233 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:09.801871 containerd[1583]: time="2025-11-24T00:29:09.801816222Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 24 00:29:09.816728 containerd[1583]: time="2025-11-24T00:29:09.816159691Z" level=info msg="Container ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:09.824337 containerd[1583]: time="2025-11-24T00:29:09.824269524Z" level=info msg="CreateContainer within sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\"" Nov 24 00:29:09.824905 containerd[1583]: time="2025-11-24T00:29:09.824883210Z" level=info msg="StartContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\"" Nov 24 00:29:09.825898 containerd[1583]: time="2025-11-24T00:29:09.825866704Z" level=info msg="connecting to shim ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4" address="unix:///run/containerd/s/505955ddaff5b3bb579c156af807385bb92e4e66b2bfaba8915f1de3907b33ad" protocol=ttrpc version=3 Nov 24 00:29:09.849154 systemd[1]: Started cri-containerd-ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4.scope - libcontainer container ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4. 
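Editor's note: the sshd "Accepted publickey" lines above identify the client key by an OpenSSH SHA256 fingerprint (SHA256:BLlm...), which is the unpadded base64 of the SHA-256 digest of the raw public-key blob. A minimal sketch of that derivation; the input blob here is a placeholder, not the actual key behind the fingerprint in the log.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// sshFingerprintSHA256 derives an OpenSSH-style fingerprint: unpadded
// base64 of the SHA-256 of the raw public-key blob, prefixed "SHA256:".
func sshFingerprintSHA256(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder bytes; in practice keyBlob is the base64-decoded second
	// field of an authorized_keys entry (after "ssh-rsa"/"ssh-ed25519").
	fmt.Println(sshFingerprintSHA256([]byte("placeholder public key blob")))
}
```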
Nov 24 00:29:09.897631 containerd[1583]: time="2025-11-24T00:29:09.897586487Z" level=info msg="StartContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" returns successfully" Nov 24 00:29:09.968834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776728208.mount: Deactivated successfully. Nov 24 00:29:10.126684 kubelet[2724]: I1124 00:29:10.126425 2724 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:29:10.335655 systemd[1]: Created slice kubepods-burstable-pod8ab5ee62_6c7f_4aac_b17b_91a53e16ffdb.slice - libcontainer container kubepods-burstable-pod8ab5ee62_6c7f_4aac_b17b_91a53e16ffdb.slice. Nov 24 00:29:10.342642 systemd[1]: Created slice kubepods-burstable-pod12adb2f8_4820_4998_80a5_24fbd726b671.slice - libcontainer container kubepods-burstable-pod12adb2f8_4820_4998_80a5_24fbd726b671.slice. Nov 24 00:29:10.495332 kubelet[2724]: I1124 00:29:10.495276 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm2d\" (UniqueName: \"kubernetes.io/projected/8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb-kube-api-access-qwm2d\") pod \"coredns-674b8bbfcf-9wgt6\" (UID: \"8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb\") " pod="kube-system/coredns-674b8bbfcf-9wgt6" Nov 24 00:29:10.495332 kubelet[2724]: I1124 00:29:10.495322 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12adb2f8-4820-4998-80a5-24fbd726b671-config-volume\") pod \"coredns-674b8bbfcf-mlqjs\" (UID: \"12adb2f8-4820-4998-80a5-24fbd726b671\") " pod="kube-system/coredns-674b8bbfcf-mlqjs" Nov 24 00:29:10.495590 kubelet[2724]: I1124 00:29:10.495356 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb-config-volume\") pod \"coredns-674b8bbfcf-9wgt6\" (UID: \"8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb\") " pod="kube-system/coredns-674b8bbfcf-9wgt6" Nov 24 00:29:10.495590 kubelet[2724]: I1124 00:29:10.495377 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhwl\" (UniqueName: \"kubernetes.io/projected/12adb2f8-4820-4998-80a5-24fbd726b671-kube-api-access-wfhwl\") pod \"coredns-674b8bbfcf-mlqjs\" (UID: \"12adb2f8-4820-4998-80a5-24fbd726b671\") " pod="kube-system/coredns-674b8bbfcf-mlqjs" Nov 24 00:29:10.640845 kubelet[2724]: E1124 00:29:10.640793 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:10.641573 containerd[1583]: time="2025-11-24T00:29:10.641527421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wgt6,Uid:8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb,Namespace:kube-system,Attempt:0,}" Nov 24 00:29:10.647259 kubelet[2724]: E1124 00:29:10.647212 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:10.647760 containerd[1583]: time="2025-11-24T00:29:10.647722041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlqjs,Uid:12adb2f8-4820-4998-80a5-24fbd726b671,Namespace:kube-system,Attempt:0,}" Nov 24 00:29:10.799711 kubelet[2724]: E1124 00:29:10.799614 2724 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:10.812830 kubelet[2724]: I1124 00:29:10.812755 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-29mm9" podStartSLOduration=9.311453338 podStartE2EDuration="23.812739385s" podCreationTimestamp="2025-11-24 00:28:47 +0000 UTC" firstStartedPulling="2025-11-24 00:28:48.213610567 +0000 UTC m=+7.215865338" lastFinishedPulling="2025-11-24 00:29:02.714896614 +0000 UTC m=+21.717151385" observedRunningTime="2025-11-24 00:29:10.81203605 +0000 UTC m=+29.814290841" watchObservedRunningTime="2025-11-24 00:29:10.812739385 +0000 UTC m=+29.814994156" Nov 24 00:29:12.088561 kubelet[2724]: E1124 00:29:12.088525 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:12.431477 systemd-networkd[1468]: cilium_host: Link UP Nov 24 00:29:12.431655 systemd-networkd[1468]: cilium_net: Link UP Nov 24 00:29:12.431825 systemd-networkd[1468]: cilium_net: Gained carrier Nov 24 00:29:12.431988 systemd-networkd[1468]: cilium_host: Gained carrier Nov 24 00:29:12.432349 systemd-networkd[1468]: cilium_net: Gained IPv6LL Nov 24 00:29:12.541765 systemd-networkd[1468]: cilium_vxlan: Link UP Nov 24 00:29:12.541773 systemd-networkd[1468]: cilium_vxlan: Gained carrier Nov 24 00:29:12.744034 kernel: NET: Registered PF_ALG protocol family Nov 24 00:29:12.857187 systemd-networkd[1468]: cilium_host: Gained IPv6LL Nov 24 00:29:13.379716 systemd-networkd[1468]: lxc_health: Link UP Nov 24 00:29:13.393327 systemd-networkd[1468]: lxc_health: Gained carrier Nov 24 00:29:13.683053 kernel: eth0: renamed from tmp60ffe Nov 24 00:29:13.682632 systemd-networkd[1468]: lxc3955479f1e88: Link UP Nov 24 00:29:13.683622 systemd-networkd[1468]: lxc3955479f1e88: Gained carrier Nov 24 00:29:13.710896 systemd-networkd[1468]: lxc13e0514c9e47: Link UP Nov 24 00:29:13.711128 kernel: eth0: renamed from tmp8dec6 Nov 24 00:29:13.713615 systemd-networkd[1468]: lxc13e0514c9e47: Gained carrier Nov 24 00:29:13.801365 systemd-networkd[1468]: cilium_vxlan: Gained IPv6LL Nov 24 00:29:14.089541 kubelet[2724]: E1124 00:29:14.089487 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:14.308843 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:59530.service - OpenSSH per-connection server daemon (10.0.0.1:59530). Nov 24 00:29:14.375401 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 59530 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:14.376771 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:14.381296 systemd-logind[1553]: New session 9 of user core. Nov 24 00:29:14.396142 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:29:14.507336 sshd[3913]: Connection closed by 10.0.0.1 port 59530 Nov 24 00:29:14.507651 sshd-session[3909]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:14.511638 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:59530.service: Deactivated successfully. Nov 24 00:29:14.513644 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:29:14.514411 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. 
Nov 24 00:29:14.515545 systemd-logind[1553]: Removed session 9. Nov 24 00:29:14.825171 systemd-networkd[1468]: lxc_health: Gained IPv6LL Nov 24 00:29:14.889225 systemd-networkd[1468]: lxc3955479f1e88: Gained IPv6LL Nov 24 00:29:15.657234 systemd-networkd[1468]: lxc13e0514c9e47: Gained IPv6LL Nov 24 00:29:17.772480 containerd[1583]: time="2025-11-24T00:29:17.772429924Z" level=info msg="connecting to shim 8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f" address="unix:///run/containerd/s/21fe8b4d69d5c0afd6b8635972642b84910f0107b798e2148f89aa9907d472d9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:29:17.776134 containerd[1583]: time="2025-11-24T00:29:17.776088085Z" level=info msg="connecting to shim 60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91" address="unix:///run/containerd/s/f035eeeb59141a332d6b68d9cd9d17593c529f58c9ef04c0985fe842c0560a56" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:29:17.812149 systemd[1]: Started cri-containerd-60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91.scope - libcontainer container 60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91. Nov 24 00:29:17.813480 systemd[1]: Started cri-containerd-8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f.scope - libcontainer container 8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f. Nov 24 00:29:17.827400 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:29:17.829084 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:29:17.866206 containerd[1583]: time="2025-11-24T00:29:17.866051647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlqjs,Uid:12adb2f8-4820-4998-80a5-24fbd726b671,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f\"" Nov 24 00:29:17.866787 kubelet[2724]: E1124 00:29:17.866746 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:17.874596 containerd[1583]: time="2025-11-24T00:29:17.874088182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wgt6,Uid:8ab5ee62-6c7f-4aac-b17b-91a53e16ffdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91\"" Nov 24 00:29:17.875096 containerd[1583]: time="2025-11-24T00:29:17.875071161Z" level=info msg="CreateContainer within sandbox \"8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:29:17.876696 kubelet[2724]: E1124 00:29:17.876669 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:17.882435 containerd[1583]: time="2025-11-24T00:29:17.882388153Z" level=info msg="CreateContainer within sandbox \"60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:29:17.893449 containerd[1583]: time="2025-11-24T00:29:17.893329441Z" level=info msg="Container e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:17.900555 
containerd[1583]: time="2025-11-24T00:29:17.900512981Z" level=info msg="CreateContainer within sandbox \"60ffe554ff4cf64989c9f4578998b2c3bda663b650cfcef032c2cb9505a6cc91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740\"" Nov 24 00:29:17.901045 containerd[1583]: time="2025-11-24T00:29:17.901025355Z" level=info msg="StartContainer for \"e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740\"" Nov 24 00:29:17.904146 containerd[1583]: time="2025-11-24T00:29:17.904105569Z" level=info msg="connecting to shim e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740" address="unix:///run/containerd/s/f035eeeb59141a332d6b68d9cd9d17593c529f58c9ef04c0985fe842c0560a56" protocol=ttrpc version=3 Nov 24 00:29:17.940173 systemd[1]: Started cri-containerd-e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740.scope - libcontainer container e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740. Nov 24 00:29:17.948393 kubelet[2724]: I1124 00:29:17.948335 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:29:17.950806 kubelet[2724]: E1124 00:29:17.950737 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:17.950917 containerd[1583]: time="2025-11-24T00:29:17.950872327Z" level=info msg="Container 59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:29:17.963637 containerd[1583]: time="2025-11-24T00:29:17.963363219Z" level=info msg="CreateContainer within sandbox \"8dec6fa4470a57644c9b00196bd64475bd47a926c01f909602ed9fe270f8e03f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642\"" Nov 24 00:29:17.963845 containerd[1583]: time="2025-11-24T00:29:17.963817094Z" level=info msg="StartContainer for \"59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642\"" Nov 24 00:29:17.964698 containerd[1583]: time="2025-11-24T00:29:17.964669397Z" level=info msg="connecting to shim 59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642" address="unix:///run/containerd/s/21fe8b4d69d5c0afd6b8635972642b84910f0107b798e2148f89aa9907d472d9" protocol=ttrpc version=3 Nov 24 00:29:17.987457 systemd[1]: Started cri-containerd-59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642.scope - libcontainer container 59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642. Nov 24 00:29:17.993823 containerd[1583]: time="2025-11-24T00:29:17.993738710Z" level=info msg="StartContainer for \"e7a854bdd11a2f77ec13a19640581653319a99778b4b7ff74bee7ab084d37740\" returns successfully" Nov 24 00:29:18.042371 containerd[1583]: time="2025-11-24T00:29:18.041781664Z" level=info msg="StartContainer for \"59d8fb71c78dc7ef1736fc2ca0f9e47af8084a6bfeb05c7f3f666aa141883642\" returns successfully" Nov 24 00:29:18.765588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560909826.mount: Deactivated successfully. 
Nov 24 00:29:18.819033 kubelet[2724]: E1124 00:29:18.818955 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:18.821637 kubelet[2724]: E1124 00:29:18.821604 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:18.821728 kubelet[2724]: E1124 00:29:18.821664 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:18.840867 kubelet[2724]: I1124 00:29:18.840801 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9wgt6" podStartSLOduration=31.840783198 podStartE2EDuration="31.840783198s" podCreationTimestamp="2025-11-24 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:29:18.829710065 +0000 UTC m=+37.831964836" watchObservedRunningTime="2025-11-24 00:29:18.840783198 +0000 UTC m=+37.843037979" Nov 24 00:29:18.850718 kubelet[2724]: I1124 00:29:18.850634 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mlqjs" podStartSLOduration=31.850623672 podStartE2EDuration="31.850623672s" podCreationTimestamp="2025-11-24 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:29:18.849808479 +0000 UTC m=+37.852063250" watchObservedRunningTime="2025-11-24 00:29:18.850623672 +0000 UTC m=+37.852878443" Nov 24 00:29:19.531980 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:59536.service - OpenSSH per-connection server daemon (10.0.0.1:59536). Nov 24 00:29:19.594895 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 59536 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:19.597042 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:19.601488 systemd-logind[1553]: New session 10 of user core. Nov 24 00:29:19.620143 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:29:19.823506 kubelet[2724]: E1124 00:29:19.823399 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:19.824046 kubelet[2724]: E1124 00:29:19.823585 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:19.994069 sshd[4114]: Connection closed by 10.0.0.1 port 59536 Nov 24 00:29:19.994434 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:19.997793 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:59536.service: Deactivated successfully. Nov 24 00:29:19.999652 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:29:20.002183 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:29:20.003135 systemd-logind[1553]: Removed session 10. 
Nov 24 00:29:20.825335 kubelet[2724]: E1124 00:29:20.825305 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:20.825763 kubelet[2724]: E1124 00:29:20.825425 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:25.011711 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:36060.service - OpenSSH per-connection server daemon (10.0.0.1:36060). Nov 24 00:29:25.068733 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 36060 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:25.070519 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:25.075032 systemd-logind[1553]: New session 11 of user core. Nov 24 00:29:25.080318 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:29:25.194859 sshd[4134]: Connection closed by 10.0.0.1 port 36060 Nov 24 00:29:25.195222 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:25.206900 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:36060.service: Deactivated successfully. Nov 24 00:29:25.208825 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:29:25.209586 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:29:25.212287 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066). Nov 24 00:29:25.212927 systemd-logind[1553]: Removed session 11. Nov 24 00:29:25.268333 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:25.269882 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:25.274297 systemd-logind[1553]: New session 12 of user core. Nov 24 00:29:25.286142 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:29:25.428093 sshd[4152]: Connection closed by 10.0.0.1 port 36066 Nov 24 00:29:25.428628 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:25.438930 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:36066.service: Deactivated successfully. Nov 24 00:29:25.441796 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:29:25.442576 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:29:25.445508 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:36082.service - OpenSSH per-connection server daemon (10.0.0.1:36082). Nov 24 00:29:25.446231 systemd-logind[1553]: Removed session 12. Nov 24 00:29:25.500279 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 36082 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:25.502045 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:25.506243 systemd-logind[1553]: New session 13 of user core. Nov 24 00:29:25.517148 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:29:25.627816 sshd[4167]: Connection closed by 10.0.0.1 port 36082 Nov 24 00:29:25.628078 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:25.632596 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:36082.service: Deactivated successfully. 
Nov 24 00:29:25.634623 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:29:25.635410 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:29:25.636812 systemd-logind[1553]: Removed session 13. Nov 24 00:29:30.643585 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:60186.service - OpenSSH per-connection server daemon (10.0.0.1:60186). Nov 24 00:29:30.703520 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 60186 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:30.705160 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:30.709506 systemd-logind[1553]: New session 14 of user core. Nov 24 00:29:30.725266 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:29:30.832429 sshd[4184]: Connection closed by 10.0.0.1 port 60186 Nov 24 00:29:30.832792 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:30.837275 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:60186.service: Deactivated successfully. Nov 24 00:29:30.839263 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:29:30.840155 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:29:30.841389 systemd-logind[1553]: Removed session 14. Nov 24 00:29:35.844318 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:60190.service - OpenSSH per-connection server daemon (10.0.0.1:60190). Nov 24 00:29:35.897483 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 60190 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:35.898676 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:35.902990 systemd-logind[1553]: New session 15 of user core. Nov 24 00:29:35.913155 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:29:36.146622 sshd[4201]: Connection closed by 10.0.0.1 port 60190 Nov 24 00:29:36.146921 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:36.151222 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:60190.service: Deactivated successfully. Nov 24 00:29:36.153273 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:29:36.153954 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:29:36.155064 systemd-logind[1553]: Removed session 15. Nov 24 00:29:41.162741 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:53250.service - OpenSSH per-connection server daemon (10.0.0.1:53250). Nov 24 00:29:41.227330 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 53250 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:41.228715 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:41.232863 systemd-logind[1553]: New session 16 of user core. Nov 24 00:29:41.242152 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:29:41.346904 sshd[4220]: Connection closed by 10.0.0.1 port 53250 Nov 24 00:29:41.347288 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:41.360473 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:53250.service: Deactivated successfully. Nov 24 00:29:41.362168 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:29:41.363006 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit. 
Nov 24 00:29:41.365409 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:53266.service - OpenSSH per-connection server daemon (10.0.0.1:53266). Nov 24 00:29:41.366080 systemd-logind[1553]: Removed session 16. Nov 24 00:29:41.423519 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 53266 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:41.424667 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:41.428664 systemd-logind[1553]: New session 17 of user core. Nov 24 00:29:41.439131 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:29:41.673947 sshd[4236]: Connection closed by 10.0.0.1 port 53266 Nov 24 00:29:41.674340 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:41.689710 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:53266.service: Deactivated successfully. Nov 24 00:29:41.691703 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:29:41.692509 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:29:41.695069 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:53282.service - OpenSSH per-connection server daemon (10.0.0.1:53282). Nov 24 00:29:41.695683 systemd-logind[1553]: Removed session 17. Nov 24 00:29:41.748088 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 53282 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:41.749318 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:41.753582 systemd-logind[1553]: New session 18 of user core. Nov 24 00:29:41.764133 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:29:42.904063 sshd[4250]: Connection closed by 10.0.0.1 port 53282 Nov 24 00:29:42.904438 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:42.916583 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:53282.service: Deactivated successfully. Nov 24 00:29:42.919641 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:29:42.921049 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:29:42.924628 systemd-logind[1553]: Removed session 18. Nov 24 00:29:42.926720 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:53288.service - OpenSSH per-connection server daemon (10.0.0.1:53288). Nov 24 00:29:42.979499 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 53288 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:42.981064 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:42.985649 systemd-logind[1553]: New session 19 of user core. Nov 24 00:29:42.998135 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:29:43.224888 sshd[4275]: Connection closed by 10.0.0.1 port 53288 Nov 24 00:29:43.225371 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:43.236979 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:53288.service: Deactivated successfully. Nov 24 00:29:43.238946 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:29:43.239810 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:29:43.242699 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:53290.service - OpenSSH per-connection server daemon (10.0.0.1:53290). Nov 24 00:29:43.243360 systemd-logind[1553]: Removed session 19. 
Nov 24 00:29:43.297977 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 53290 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:43.299827 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:43.304710 systemd-logind[1553]: New session 20 of user core. Nov 24 00:29:43.312171 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 00:29:43.422942 sshd[4289]: Connection closed by 10.0.0.1 port 53290 Nov 24 00:29:43.423313 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:43.427341 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:53290.service: Deactivated successfully. Nov 24 00:29:43.429483 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:29:43.430498 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:29:43.431676 systemd-logind[1553]: Removed session 20. Nov 24 00:29:48.438749 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:53306.service - OpenSSH per-connection server daemon (10.0.0.1:53306). Nov 24 00:29:48.499081 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 53306 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:48.500974 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:48.505297 systemd-logind[1553]: New session 21 of user core. Nov 24 00:29:48.515186 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 00:29:48.625055 sshd[4305]: Connection closed by 10.0.0.1 port 53306 Nov 24 00:29:48.625404 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:48.630212 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:53306.service: Deactivated successfully. Nov 24 00:29:48.632222 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:29:48.633084 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:29:48.634736 systemd-logind[1553]: Removed session 21. Nov 24 00:29:53.649914 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:51986.service - OpenSSH per-connection server daemon (10.0.0.1:51986). Nov 24 00:29:53.694630 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 51986 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:53.696303 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:53.700445 systemd-logind[1553]: New session 22 of user core. Nov 24 00:29:53.712142 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:29:53.816759 sshd[4326]: Connection closed by 10.0.0.1 port 51986 Nov 24 00:29:53.817159 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:53.821743 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:51986.service: Deactivated successfully. Nov 24 00:29:53.823868 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:29:53.824589 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:29:53.825743 systemd-logind[1553]: Removed session 22. 
Nov 24 00:29:57.078424 kubelet[2724]: E1124 00:29:57.078387 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:57.078925 kubelet[2724]: E1124 00:29:57.078529 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:58.828396 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:51988.service - OpenSSH per-connection server daemon (10.0.0.1:51988). Nov 24 00:29:58.875626 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 51988 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:58.877538 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:58.882291 systemd-logind[1553]: New session 23 of user core. Nov 24 00:29:58.897311 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 00:29:59.000026 sshd[4342]: Connection closed by 10.0.0.1 port 51988 Nov 24 00:29:59.000336 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Nov 24 00:29:59.010651 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:51988.service: Deactivated successfully. Nov 24 00:29:59.012599 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:29:59.013308 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:29:59.015999 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:51996.service - OpenSSH per-connection server daemon (10.0.0.1:51996). Nov 24 00:29:59.016916 systemd-logind[1553]: Removed session 23. Nov 24 00:29:59.068128 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 51996 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:29:59.069351 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:29:59.073469 systemd-logind[1553]: New session 24 of user core. Nov 24 00:29:59.079033 kubelet[2724]: E1124 00:29:59.078957 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:29:59.079166 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 24 00:30:00.461994 containerd[1583]: time="2025-11-24T00:30:00.461940201Z" level=info msg="StopContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" with timeout 30 (s)" Nov 24 00:30:00.472408 containerd[1583]: time="2025-11-24T00:30:00.472322359Z" level=info msg="Stop container \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" with signal terminated" Nov 24 00:30:00.486060 systemd[1]: cri-containerd-38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda.scope: Deactivated successfully. 
Nov 24 00:30:00.486950 containerd[1583]: time="2025-11-24T00:30:00.486915802Z" level=info msg="received container exit event container_id:\"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" id:\"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" pid:3300 exited_at:{seconds:1763944200 nanos:486634972}" Nov 24 00:30:00.497152 containerd[1583]: time="2025-11-24T00:30:00.497111152Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:30:00.498125 containerd[1583]: time="2025-11-24T00:30:00.498101733Z" level=info msg="StopContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" with timeout 2 (s)" Nov 24 00:30:00.498469 containerd[1583]: time="2025-11-24T00:30:00.498440824Z" level=info msg="Stop container \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" with signal terminated" Nov 24 00:30:00.506899 systemd-networkd[1468]: lxc_health: Link DOWN Nov 24 00:30:00.506912 systemd-networkd[1468]: lxc_health: Lost carrier Nov 24 00:30:00.517994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda-rootfs.mount: Deactivated successfully. Nov 24 00:30:00.531444 systemd[1]: cri-containerd-ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4.scope: Deactivated successfully. Nov 24 00:30:00.531621 containerd[1583]: time="2025-11-24T00:30:00.531511513Z" level=info msg="StopContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" returns successfully" Nov 24 00:30:00.533268 systemd[1]: cri-containerd-ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4.scope: Consumed 6.506s CPU time, 128.9M memory peak, 248K read from disk, 13.3M written to disk. Nov 24 00:30:00.533385 containerd[1583]: time="2025-11-24T00:30:00.533321337Z" level=info msg="received container exit event container_id:\"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" id:\"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" pid:3408 exited_at:{seconds:1763944200 nanos:532533515}" Nov 24 00:30:00.534512 containerd[1583]: time="2025-11-24T00:30:00.534487175Z" level=info msg="StopPodSandbox for \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\"" Nov 24 00:30:00.543229 containerd[1583]: time="2025-11-24T00:30:00.543184710Z" level=info msg="Container to stop \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.550359 systemd[1]: cri-containerd-ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012.scope: Deactivated successfully. Nov 24 00:30:00.552553 containerd[1583]: time="2025-11-24T00:30:00.552519017Z" level=info msg="received sandbox exit event container_id:\"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" id:\"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" exit_status:137 exited_at:{seconds:1763944200 nanos:552248518}" monitor_name=podsandbox Nov 24 00:30:00.558381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4-rootfs.mount: Deactivated successfully. 
Nov 24 00:30:00.565482 containerd[1583]: time="2025-11-24T00:30:00.565416164Z" level=info msg="StopContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" returns successfully" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566048648Z" level=info msg="StopPodSandbox for \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\"" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566103653Z" level=info msg="Container to stop \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566113683Z" level=info msg="Container to stop \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566121327Z" level=info msg="Container to stop \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566129352Z" level=info msg="Container to stop \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.566242 containerd[1583]: time="2025-11-24T00:30:00.566136917Z" level=info msg="Container to stop \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 24 00:30:00.574131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012-rootfs.mount: Deactivated successfully. Nov 24 00:30:00.575633 systemd[1]: cri-containerd-839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509.scope: Deactivated successfully. Nov 24 00:30:00.576477 containerd[1583]: time="2025-11-24T00:30:00.576450434Z" level=info msg="received sandbox exit event container_id:\"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" id:\"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" exit_status:137 exited_at:{seconds:1763944200 nanos:576267904}" monitor_name=podsandbox Nov 24 00:30:00.580665 containerd[1583]: time="2025-11-24T00:30:00.580631079Z" level=info msg="shim disconnected" id=ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012 namespace=k8s.io Nov 24 00:30:00.580729 containerd[1583]: time="2025-11-24T00:30:00.580665835Z" level=warning msg="cleaning up after shim disconnected" id=ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012 namespace=k8s.io Nov 24 00:30:00.580758 containerd[1583]: time="2025-11-24T00:30:00.580676276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 24 00:30:00.597838 containerd[1583]: time="2025-11-24T00:30:00.597789404Z" level=info msg="TearDown network for sandbox \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" successfully" Nov 24 00:30:00.598661 containerd[1583]: time="2025-11-24T00:30:00.598451465Z" level=info msg="StopPodSandbox for \"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" returns successfully" Nov 24 00:30:00.600154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012-shm.mount: Deactivated successfully. 
Nov 24 00:30:00.602958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509-rootfs.mount: Deactivated successfully. Nov 24 00:30:00.604821 containerd[1583]: time="2025-11-24T00:30:00.604787186Z" level=info msg="shim disconnected" id=839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509 namespace=k8s.io Nov 24 00:30:00.604821 containerd[1583]: time="2025-11-24T00:30:00.604818317Z" level=warning msg="cleaning up after shim disconnected" id=839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509 namespace=k8s.io Nov 24 00:30:00.604941 containerd[1583]: time="2025-11-24T00:30:00.604826462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 24 00:30:00.616142 containerd[1583]: time="2025-11-24T00:30:00.616087968Z" level=info msg="received sandbox container exit event sandbox_id:\"ae42107010f3a59e378de2123abd659c7f70f758de5ac5f4558f023c9d0ea012\" exit_status:137 exited_at:{seconds:1763944200 nanos:552248518}" monitor_name=criService Nov 24 00:30:00.619867 containerd[1583]: time="2025-11-24T00:30:00.619817357Z" level=info msg="received sandbox container exit event sandbox_id:\"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" exit_status:137 exited_at:{seconds:1763944200 nanos:576267904}" monitor_name=criService Nov 24 00:30:00.619995 containerd[1583]: time="2025-11-24T00:30:00.619962897Z" level=info msg="TearDown network for sandbox \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" successfully" Nov 24 00:30:00.619995 containerd[1583]: time="2025-11-24T00:30:00.619985400Z" level=info msg="StopPodSandbox for \"839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509\" returns successfully" Nov 24 00:30:00.666112 kubelet[2724]: I1124 00:30:00.666078 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57q8b\" (UniqueName: \"kubernetes.io/projected/48452b04-e529-483e-9fab-f06eda679727-kube-api-access-57q8b\") pod \"48452b04-e529-483e-9fab-f06eda679727\" (UID: \"48452b04-e529-483e-9fab-f06eda679727\") " Nov 24 00:30:00.669557 kubelet[2724]: I1124 00:30:00.669512 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48452b04-e529-483e-9fab-f06eda679727-kube-api-access-57q8b" (OuterVolumeSpecName: "kube-api-access-57q8b") pod "48452b04-e529-483e-9fab-f06eda679727" (UID: "48452b04-e529-483e-9fab-f06eda679727"). InnerVolumeSpecName "kube-api-access-57q8b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:30:00.767235 kubelet[2724]: I1124 00:30:00.767203 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-hubble-tls\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767297 kubelet[2724]: I1124 00:30:00.767235 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-xtables-lock\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767297 kubelet[2724]: I1124 00:30:00.767258 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/205d858a-752d-4a19-9c52-ab5937f304bb-clustermesh-secrets\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767297 kubelet[2724]: I1124 00:30:00.767281 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvktb\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-kube-api-access-cvktb\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767379 kubelet[2724]: I1124 00:30:00.767303 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-config-path\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767379 kubelet[2724]: I1124 00:30:00.767320 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-net\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767379 kubelet[2724]: I1124 00:30:00.767338 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-bpf-maps\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767379 kubelet[2724]: I1124 00:30:00.767359 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48452b04-e529-483e-9fab-f06eda679727-cilium-config-path\") pod \"48452b04-e529-483e-9fab-f06eda679727\" (UID: \"48452b04-e529-483e-9fab-f06eda679727\") " Nov 24 00:30:00.767470 kubelet[2724]: I1124 00:30:00.767381 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-kernel\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767470 kubelet[2724]: I1124 00:30:00.767398 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-run\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: 
\"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767470 kubelet[2724]: I1124 00:30:00.767417 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-etc-cni-netd\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767470 kubelet[2724]: I1124 00:30:00.767436 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-hostproc\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767470 kubelet[2724]: I1124 00:30:00.767453 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-lib-modules\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767581 kubelet[2724]: I1124 00:30:00.767471 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-cgroup\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767581 kubelet[2724]: I1124 00:30:00.767488 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cni-path\") pod \"205d858a-752d-4a19-9c52-ab5937f304bb\" (UID: \"205d858a-752d-4a19-9c52-ab5937f304bb\") " Nov 24 00:30:00.767581 kubelet[2724]: I1124 00:30:00.767524 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57q8b\" (UniqueName: \"kubernetes.io/projected/48452b04-e529-483e-9fab-f06eda679727-kube-api-access-57q8b\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.767581 kubelet[2724]: I1124 00:30:00.767556 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770171 kubelet[2724]: I1124 00:30:00.770134 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-kube-api-access-cvktb" (OuterVolumeSpecName: "kube-api-access-cvktb") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "kube-api-access-cvktb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:30:00.770224 kubelet[2724]: I1124 00:30:00.770182 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770224 kubelet[2724]: I1124 00:30:00.770202 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770224 kubelet[2724]: I1124 00:30:00.770219 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770965 kubelet[2724]: I1124 00:30:00.770589 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770965 kubelet[2724]: I1124 00:30:00.770689 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770965 kubelet[2724]: I1124 00:30:00.770742 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770965 kubelet[2724]: I1124 00:30:00.770766 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.770965 kubelet[2724]: I1124 00:30:00.770774 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.771206 kubelet[2724]: I1124 00:30:00.770876 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205d858a-752d-4a19-9c52-ab5937f304bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:30:00.771206 kubelet[2724]: I1124 00:30:00.770919 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 24 00:30:00.771466 kubelet[2724]: I1124 00:30:00.771441 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:30:00.772435 kubelet[2724]: I1124 00:30:00.772392 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "205d858a-752d-4a19-9c52-ab5937f304bb" (UID: "205d858a-752d-4a19-9c52-ab5937f304bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:30:00.774314 kubelet[2724]: I1124 00:30:00.774286 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48452b04-e529-483e-9fab-f06eda679727-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48452b04-e529-483e-9fab-f06eda679727" (UID: "48452b04-e529-483e-9fab-f06eda679727"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867634 2724 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867657 2724 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867666 2724 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/205d858a-752d-4a19-9c52-ab5937f304bb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867677 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvktb\" (UniqueName: \"kubernetes.io/projected/205d858a-752d-4a19-9c52-ab5937f304bb-kube-api-access-cvktb\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867685 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867692 2724 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867679 kubelet[2724]: I1124 00:30:00.867700 2724 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867708 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48452b04-e529-483e-9fab-f06eda679727-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867716 2724 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867723 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867730 2724 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867737 2724 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867744 2724 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-lib-modules\") on node 
\"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867751 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.867984 kubelet[2724]: I1124 00:30:00.867759 2724 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/205d858a-752d-4a19-9c52-ab5937f304bb-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 24 00:30:00.948302 kubelet[2724]: I1124 00:30:00.948264 2724 scope.go:117] "RemoveContainer" containerID="38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda" Nov 24 00:30:00.953564 containerd[1583]: time="2025-11-24T00:30:00.953519983Z" level=info msg="RemoveContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\"" Nov 24 00:30:00.955976 systemd[1]: Removed slice kubepods-besteffort-pod48452b04_e529_483e_9fab_f06eda679727.slice - libcontainer container kubepods-besteffort-pod48452b04_e529_483e_9fab_f06eda679727.slice. Nov 24 00:30:00.961906 containerd[1583]: time="2025-11-24T00:30:00.961866394Z" level=info msg="RemoveContainer for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" returns successfully" Nov 24 00:30:00.962455 containerd[1583]: time="2025-11-24T00:30:00.962352787Z" level=error msg="ContainerStatus for \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\": not found" Nov 24 00:30:00.962506 kubelet[2724]: I1124 00:30:00.962103 2724 scope.go:117] "RemoveContainer" containerID="38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda" Nov 24 00:30:00.962556 kubelet[2724]: E1124 00:30:00.962512 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\": not found" containerID="38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda" Nov 24 00:30:00.962556 kubelet[2724]: I1124 00:30:00.962535 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda"} err="failed to get container status \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\": rpc error: code = NotFound desc = an error occurred when try to find container \"38470f8073a3c2a26335f280c60aa65ae0aa4cebe9aff121484bbca301afecda\": not found" Nov 24 00:30:00.962636 kubelet[2724]: I1124 00:30:00.962563 2724 scope.go:117] "RemoveContainer" containerID="ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4" Nov 24 00:30:00.963359 systemd[1]: Removed slice kubepods-burstable-pod205d858a_752d_4a19_9c52_ab5937f304bb.slice - libcontainer container kubepods-burstable-pod205d858a_752d_4a19_9c52_ab5937f304bb.slice. Nov 24 00:30:00.963611 systemd[1]: kubepods-burstable-pod205d858a_752d_4a19_9c52_ab5937f304bb.slice: Consumed 6.631s CPU time, 129.2M memory peak, 256K read from disk, 13.3M written to disk. 
Nov 24 00:30:00.964213 containerd[1583]: time="2025-11-24T00:30:00.964071967Z" level=info msg="RemoveContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\"" Nov 24 00:30:00.985509 containerd[1583]: time="2025-11-24T00:30:00.985455232Z" level=info msg="RemoveContainer for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" returns successfully" Nov 24 00:30:00.985726 kubelet[2724]: I1124 00:30:00.985701 2724 scope.go:117] "RemoveContainer" containerID="5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a" Nov 24 00:30:00.987219 containerd[1583]: time="2025-11-24T00:30:00.987183739Z" level=info msg="RemoveContainer for \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\"" Nov 24 00:30:00.991664 containerd[1583]: time="2025-11-24T00:30:00.991638350Z" level=info msg="RemoveContainer for \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" returns successfully" Nov 24 00:30:00.991763 kubelet[2724]: I1124 00:30:00.991744 2724 scope.go:117] "RemoveContainer" containerID="27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1" Nov 24 00:30:00.993571 containerd[1583]: time="2025-11-24T00:30:00.993543558Z" level=info msg="RemoveContainer for \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\"" Nov 24 00:30:00.997743 containerd[1583]: time="2025-11-24T00:30:00.997705206Z" level=info msg="RemoveContainer for \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" returns successfully" Nov 24 00:30:00.997880 kubelet[2724]: I1124 00:30:00.997847 2724 scope.go:117] "RemoveContainer" containerID="f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be" Nov 24 00:30:00.998866 containerd[1583]: time="2025-11-24T00:30:00.998841216Z" level=info msg="RemoveContainer for \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\"" Nov 24 00:30:01.002252 containerd[1583]: time="2025-11-24T00:30:01.002221012Z" level=info msg="RemoveContainer for \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" returns successfully" Nov 24 00:30:01.002357 kubelet[2724]: I1124 00:30:01.002332 2724 scope.go:117] "RemoveContainer" containerID="c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4" Nov 24 00:30:01.003373 containerd[1583]: time="2025-11-24T00:30:01.003352173Z" level=info msg="RemoveContainer for \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\"" Nov 24 00:30:01.012020 containerd[1583]: time="2025-11-24T00:30:01.011977739Z" level=info msg="RemoveContainer for \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" returns successfully" Nov 24 00:30:01.012202 kubelet[2724]: I1124 00:30:01.012166 2724 scope.go:117] "RemoveContainer" containerID="ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4" Nov 24 00:30:01.012407 containerd[1583]: time="2025-11-24T00:30:01.012377586Z" level=error msg="ContainerStatus for \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\": not found" Nov 24 00:30:01.012518 kubelet[2724]: E1124 00:30:01.012488 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\": not found" 
containerID="ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4" Nov 24 00:30:01.012580 kubelet[2724]: I1124 00:30:01.012516 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4"} err="failed to get container status \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea5014a12b5a093105cce4040001d0476caec0a98bbd6ccecee7365c21b431d4\": not found" Nov 24 00:30:01.012580 kubelet[2724]: I1124 00:30:01.012543 2724 scope.go:117] "RemoveContainer" containerID="5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a" Nov 24 00:30:01.012680 containerd[1583]: time="2025-11-24T00:30:01.012660599Z" level=error msg="ContainerStatus for \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\": not found" Nov 24 00:30:01.012774 kubelet[2724]: E1124 00:30:01.012745 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\": not found" containerID="5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a" Nov 24 00:30:01.012774 kubelet[2724]: I1124 00:30:01.012764 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a"} err="failed to get container status \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bcb176b29ebe21ae65716cf52f0015f1cc0b99dd4ed413911b19c939321612a\": not found" Nov 24 00:30:01.012883 kubelet[2724]: I1124 00:30:01.012780 2724 scope.go:117] "RemoveContainer" containerID="27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1" Nov 24 00:30:01.012977 containerd[1583]: time="2025-11-24T00:30:01.012938612Z" level=error msg="ContainerStatus for \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\": not found" Nov 24 00:30:01.013143 kubelet[2724]: E1124 00:30:01.013114 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\": not found" containerID="27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1" Nov 24 00:30:01.013193 kubelet[2724]: I1124 00:30:01.013140 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1"} err="failed to get container status \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"27fad063dff8ecab89ca21c5bb3f04e98cd4c906648f1831f32adf96795540d1\": not found" Nov 24 00:30:01.013193 kubelet[2724]: I1124 00:30:01.013158 2724 scope.go:117] "RemoveContainer" 
containerID="f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be" Nov 24 00:30:01.013330 containerd[1583]: time="2025-11-24T00:30:01.013297281Z" level=error msg="ContainerStatus for \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\": not found" Nov 24 00:30:01.013426 kubelet[2724]: E1124 00:30:01.013403 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\": not found" containerID="f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be" Nov 24 00:30:01.013461 kubelet[2724]: I1124 00:30:01.013422 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be"} err="failed to get container status \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\": rpc error: code = NotFound desc = an error occurred when try to find container \"f295d3e4bb740910737fd8b3d0c9d76cb8d4bfdc49355ab4d487dc55c152c6be\": not found" Nov 24 00:30:01.013461 kubelet[2724]: I1124 00:30:01.013435 2724 scope.go:117] "RemoveContainer" containerID="c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4" Nov 24 00:30:01.013607 containerd[1583]: time="2025-11-24T00:30:01.013544745Z" level=error msg="ContainerStatus for \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\": not found" Nov 24 00:30:01.013703 kubelet[2724]: E1124 00:30:01.013661 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\": not found" containerID="c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4" Nov 24 00:30:01.013703 kubelet[2724]: I1124 00:30:01.013684 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4"} err="failed to get container status \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c01461e824b6ac8de4982ff1b9936fa2df60956c1401978c9eca518fecc357f4\": not found" Nov 24 00:30:01.078750 kubelet[2724]: E1124 00:30:01.078614 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:01.081852 kubelet[2724]: I1124 00:30:01.081801 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205d858a-752d-4a19-9c52-ab5937f304bb" path="/var/lib/kubelet/pods/205d858a-752d-4a19-9c52-ab5937f304bb/volumes" Nov 24 00:30:01.083233 kubelet[2724]: I1124 00:30:01.083204 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48452b04-e529-483e-9fab-f06eda679727" path="/var/lib/kubelet/pods/48452b04-e529-483e-9fab-f06eda679727/volumes" Nov 24 00:30:01.127071 kubelet[2724]: E1124 00:30:01.126998 2724 kubelet.go:3117] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 24 00:30:01.512928 systemd[1]: var-lib-kubelet-pods-48452b04\x2de529\x2d483e\x2d9fab\x2df06eda679727-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57q8b.mount: Deactivated successfully. Nov 24 00:30:01.513071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-839318a5a15ff9c80340888900ba35444f2310aaf31c608881ae3a7f60cf4509-shm.mount: Deactivated successfully. Nov 24 00:30:01.513168 systemd[1]: var-lib-kubelet-pods-205d858a\x2d752d\x2d4a19\x2d9c52\x2dab5937f304bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvktb.mount: Deactivated successfully. Nov 24 00:30:01.513272 systemd[1]: var-lib-kubelet-pods-205d858a\x2d752d\x2d4a19\x2d9c52\x2dab5937f304bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 24 00:30:01.513375 systemd[1]: var-lib-kubelet-pods-205d858a\x2d752d\x2d4a19\x2d9c52\x2dab5937f304bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 24 00:30:02.430885 sshd[4359]: Connection closed by 10.0.0.1 port 51996 Nov 24 00:30:02.431434 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Nov 24 00:30:02.441634 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:51996.service: Deactivated successfully. Nov 24 00:30:02.443538 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:30:02.444374 systemd-logind[1553]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:30:02.447399 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:45274.service - OpenSSH per-connection server daemon (10.0.0.1:45274). Nov 24 00:30:02.448004 systemd-logind[1553]: Removed session 24. Nov 24 00:30:02.518895 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 45274 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:30:02.520356 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:30:02.524633 systemd-logind[1553]: New session 25 of user core. Nov 24 00:30:02.538138 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 24 00:30:02.975591 kubelet[2724]: I1124 00:30:02.975542 2724 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T00:30:02Z","lastTransitionTime":"2025-11-24T00:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 24 00:30:03.115413 sshd[4502]: Connection closed by 10.0.0.1 port 45274 Nov 24 00:30:03.113916 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Nov 24 00:30:03.125613 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:45274.service: Deactivated successfully. Nov 24 00:30:03.130201 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 00:30:03.134175 systemd-logind[1553]: Session 25 logged out. Waiting for processes to exit. Nov 24 00:30:03.138354 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:45290.service - OpenSSH per-connection server daemon (10.0.0.1:45290). Nov 24 00:30:03.142720 systemd-logind[1553]: Removed session 25. Nov 24 00:30:03.159660 systemd[1]: Created slice kubepods-burstable-pod4e98c1f1_9c47_4b76_91ee_c508a37c3481.slice - libcontainer container kubepods-burstable-pod4e98c1f1_9c47_4b76_91ee_c508a37c3481.slice. 
Nov 24 00:30:03.200034 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 45290 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:30:03.201543 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:30:03.205604 systemd-logind[1553]: New session 26 of user core. Nov 24 00:30:03.215141 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 24 00:30:03.265584 sshd[4517]: Connection closed by 10.0.0.1 port 45290 Nov 24 00:30:03.266111 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Nov 24 00:30:03.275741 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:45290.service: Deactivated successfully. Nov 24 00:30:03.277599 systemd[1]: session-26.scope: Deactivated successfully. Nov 24 00:30:03.278313 systemd-logind[1553]: Session 26 logged out. Waiting for processes to exit. Nov 24 00:30:03.280990 systemd[1]: Started sshd@26-10.0.0.139:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). Nov 24 00:30:03.281425 kubelet[2724]: I1124 00:30:03.281396 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-cilium-run\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281492 kubelet[2724]: I1124 00:30:03.281428 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-etc-cni-netd\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281492 kubelet[2724]: I1124 00:30:03.281447 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4e98c1f1-9c47-4b76-91ee-c508a37c3481-cilium-ipsec-secrets\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281492 kubelet[2724]: I1124 00:30:03.281461 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-cilium-cgroup\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281492 kubelet[2724]: I1124 00:30:03.281479 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-lib-modules\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281492 kubelet[2724]: I1124 00:30:03.281492 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-host-proc-sys-kernel\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281505 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-cni-path\") pod 
\"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281521 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e98c1f1-9c47-4b76-91ee-c508a37c3481-cilium-config-path\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281535 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-bpf-maps\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281547 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp82w\" (UniqueName: \"kubernetes.io/projected/4e98c1f1-9c47-4b76-91ee-c508a37c3481-kube-api-access-vp82w\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281561 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e98c1f1-9c47-4b76-91ee-c508a37c3481-hubble-tls\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281616 kubelet[2724]: I1124 00:30:03.281578 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e98c1f1-9c47-4b76-91ee-c508a37c3481-clustermesh-secrets\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281742 kubelet[2724]: I1124 00:30:03.281604 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-hostproc\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281742 kubelet[2724]: I1124 00:30:03.281617 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-xtables-lock\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.281742 kubelet[2724]: I1124 00:30:03.281630 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e98c1f1-9c47-4b76-91ee-c508a37c3481-host-proc-sys-net\") pod \"cilium-dbw7v\" (UID: \"4e98c1f1-9c47-4b76-91ee-c508a37c3481\") " pod="kube-system/cilium-dbw7v" Nov 24 00:30:03.282273 systemd-logind[1553]: Removed session 26. Nov 24 00:30:03.341003 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:30:03.342703 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:30:03.346665 systemd-logind[1553]: New session 27 of user core. 
Nov 24 00:30:03.357124 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 24 00:30:03.464125 kubelet[2724]: E1124 00:30:03.464088 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:03.464642 containerd[1583]: time="2025-11-24T00:30:03.464594949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbw7v,Uid:4e98c1f1-9c47-4b76-91ee-c508a37c3481,Namespace:kube-system,Attempt:0,}" Nov 24 00:30:03.479509 containerd[1583]: time="2025-11-24T00:30:03.479453118Z" level=info msg="connecting to shim 21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:30:03.505144 systemd[1]: Started cri-containerd-21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c.scope - libcontainer container 21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c. Nov 24 00:30:03.529790 containerd[1583]: time="2025-11-24T00:30:03.529701770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbw7v,Uid:4e98c1f1-9c47-4b76-91ee-c508a37c3481,Namespace:kube-system,Attempt:0,} returns sandbox id \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\"" Nov 24 00:30:03.530808 kubelet[2724]: E1124 00:30:03.530781 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:03.536605 containerd[1583]: time="2025-11-24T00:30:03.536558469Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 24 00:30:03.545024 containerd[1583]: time="2025-11-24T00:30:03.544930931Z" level=info msg="Container 2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:30:03.552444 containerd[1583]: time="2025-11-24T00:30:03.552402628Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81\"" Nov 24 00:30:03.552918 containerd[1583]: time="2025-11-24T00:30:03.552882868Z" level=info msg="StartContainer for \"2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81\"" Nov 24 00:30:03.553699 containerd[1583]: time="2025-11-24T00:30:03.553672050Z" level=info msg="connecting to shim 2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" protocol=ttrpc version=3 Nov 24 00:30:03.571157 systemd[1]: Started cri-containerd-2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81.scope - libcontainer container 2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81. Nov 24 00:30:03.600544 containerd[1583]: time="2025-11-24T00:30:03.600509176Z" level=info msg="StartContainer for \"2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81\" returns successfully" Nov 24 00:30:03.608864 systemd[1]: cri-containerd-2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81.scope: Deactivated successfully. 
Nov 24 00:30:03.609971 containerd[1583]: time="2025-11-24T00:30:03.609914688Z" level=info msg="received container exit event container_id:\"2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81\" id:\"2cfaa02be3ba5de57ada4b3eb30095c199df1ef52ef6ca4cf181fd8d8f5c4f81\" pid:4599 exited_at:{seconds:1763944203 nanos:609657354}" Nov 24 00:30:03.967080 kubelet[2724]: E1124 00:30:03.967048 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:03.975968 containerd[1583]: time="2025-11-24T00:30:03.975915878Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 24 00:30:03.984890 containerd[1583]: time="2025-11-24T00:30:03.984831702Z" level=info msg="Container 31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:30:03.992288 containerd[1583]: time="2025-11-24T00:30:03.992238404Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217\"" Nov 24 00:30:03.992731 containerd[1583]: time="2025-11-24T00:30:03.992705499Z" level=info msg="StartContainer for \"31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217\"" Nov 24 00:30:03.993572 containerd[1583]: time="2025-11-24T00:30:03.993552572Z" level=info msg="connecting to shim 31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" protocol=ttrpc version=3 Nov 24 00:30:04.020135 systemd[1]: Started cri-containerd-31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217.scope - libcontainer container 31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217. Nov 24 00:30:04.051276 containerd[1583]: time="2025-11-24T00:30:04.051219796Z" level=info msg="StartContainer for \"31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217\" returns successfully" Nov 24 00:30:04.058386 systemd[1]: cri-containerd-31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217.scope: Deactivated successfully. 
Nov 24 00:30:04.059642 containerd[1583]: time="2025-11-24T00:30:04.059610755Z" level=info msg="received container exit event container_id:\"31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217\" id:\"31d7dc5a7aa396b6b9f3882b30bf618dd0466ab9e3d93c20148641e5db5bb217\" pid:4645 exited_at:{seconds:1763944204 nanos:59416794}" Nov 24 00:30:04.970364 kubelet[2724]: E1124 00:30:04.970309 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:04.975151 containerd[1583]: time="2025-11-24T00:30:04.975109145Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 24 00:30:04.986261 containerd[1583]: time="2025-11-24T00:30:04.986213019Z" level=info msg="Container ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:30:04.994383 containerd[1583]: time="2025-11-24T00:30:04.994341216Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c\"" Nov 24 00:30:04.994833 containerd[1583]: time="2025-11-24T00:30:04.994807308Z" level=info msg="StartContainer for \"ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c\"" Nov 24 00:30:04.996118 containerd[1583]: time="2025-11-24T00:30:04.996077230Z" level=info msg="connecting to shim ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" protocol=ttrpc version=3 Nov 24 00:30:05.023165 systemd[1]: Started cri-containerd-ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c.scope - libcontainer container ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c. Nov 24 00:30:05.228783 systemd[1]: cri-containerd-ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c.scope: Deactivated successfully. Nov 24 00:30:05.275444 containerd[1583]: time="2025-11-24T00:30:05.275379447Z" level=info msg="received container exit event container_id:\"ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c\" id:\"ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c\" pid:4689 exited_at:{seconds:1763944205 nanos:230543039}" Nov 24 00:30:05.286731 containerd[1583]: time="2025-11-24T00:30:05.286684926Z" level=info msg="StartContainer for \"ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c\" returns successfully" Nov 24 00:30:05.303278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef669f6dcd88e1ca0e0314f9a4f66aa37ca0c808245e33d50bc9df233440ae1c-rootfs.mount: Deactivated successfully. 
Nov 24 00:30:05.976559 kubelet[2724]: E1124 00:30:05.976525 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:05.981915 containerd[1583]: time="2025-11-24T00:30:05.981855933Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 24 00:30:05.991935 containerd[1583]: time="2025-11-24T00:30:05.991879889Z" level=info msg="Container 310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:30:05.999532 containerd[1583]: time="2025-11-24T00:30:05.999479475Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678\"" Nov 24 00:30:06.000232 containerd[1583]: time="2025-11-24T00:30:05.999982278Z" level=info msg="StartContainer for \"310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678\"" Nov 24 00:30:06.000955 containerd[1583]: time="2025-11-24T00:30:06.000908411Z" level=info msg="connecting to shim 310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" protocol=ttrpc version=3 Nov 24 00:30:06.028253 systemd[1]: Started cri-containerd-310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678.scope - libcontainer container 310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678. Nov 24 00:30:06.053354 systemd[1]: cri-containerd-310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678.scope: Deactivated successfully. Nov 24 00:30:06.055383 containerd[1583]: time="2025-11-24T00:30:06.055350630Z" level=info msg="received container exit event container_id:\"310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678\" id:\"310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678\" pid:4727 exited_at:{seconds:1763944206 nanos:54441661}" Nov 24 00:30:06.062433 containerd[1583]: time="2025-11-24T00:30:06.062413581Z" level=info msg="StartContainer for \"310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678\" returns successfully" Nov 24 00:30:06.074506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-310248bcd53966bcc91e50d6b0d941a40a09360048f7c82cda1cbcd06fd48678-rootfs.mount: Deactivated successfully. 
Nov 24 00:30:06.128699 kubelet[2724]: E1124 00:30:06.128657 2724 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 24 00:30:06.981539 kubelet[2724]: E1124 00:30:06.981508 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:06.986129 containerd[1583]: time="2025-11-24T00:30:06.986091436Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 24 00:30:06.998289 containerd[1583]: time="2025-11-24T00:30:06.998226040Z" level=info msg="Container ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:30:07.008043 containerd[1583]: time="2025-11-24T00:30:07.007990944Z" level=info msg="CreateContainer within sandbox \"21eda074797fb75bbe56284e454de73939bfe2d4e1a2fa01131e7978ab65b06c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83\"" Nov 24 00:30:07.008498 containerd[1583]: time="2025-11-24T00:30:07.008469429Z" level=info msg="StartContainer for \"ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83\"" Nov 24 00:30:07.009384 containerd[1583]: time="2025-11-24T00:30:07.009361765Z" level=info msg="connecting to shim ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83" address="unix:///run/containerd/s/91e11eb9095697a14f810035b2c1a20ae09e5e3d5d36a06d7c9ea2c39e9a11ba" protocol=ttrpc version=3 Nov 24 00:30:07.034147 systemd[1]: Started cri-containerd-ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83.scope - libcontainer container ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83. 
Nov 24 00:30:07.085805 containerd[1583]: time="2025-11-24T00:30:07.085765816Z" level=info msg="StartContainer for \"ed2c6f3f91265b8903d84266e507db14a7754129613dbb58f70bac597644bd83\" returns successfully" Nov 24 00:30:07.500038 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 24 00:30:07.987382 kubelet[2724]: E1124 00:30:07.987307 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:08.038280 kubelet[2724]: I1124 00:30:08.038203 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbw7v" podStartSLOduration=5.038183419 podStartE2EDuration="5.038183419s" podCreationTimestamp="2025-11-24 00:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:30:08.037829382 +0000 UTC m=+87.040084143" watchObservedRunningTime="2025-11-24 00:30:08.038183419 +0000 UTC m=+87.040438190" Nov 24 00:30:09.465636 kubelet[2724]: E1124 00:30:09.465589 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:10.578084 systemd-networkd[1468]: lxc_health: Link UP Nov 24 00:30:10.592158 systemd-networkd[1468]: lxc_health: Gained carrier Nov 24 00:30:11.466043 kubelet[2724]: E1124 00:30:11.465779 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:11.746416 kubelet[2724]: E1124 00:30:11.746157 2724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41180->127.0.0.1:33467: write tcp 127.0.0.1:41180->127.0.0.1:33467: write: connection reset by peer Nov 24 00:30:11.995404 kubelet[2724]: E1124 00:30:11.995360 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:12.492196 systemd-networkd[1468]: lxc_health: Gained IPv6LL Nov 24 00:30:12.997498 kubelet[2724]: E1124 00:30:12.997464 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:30:15.981835 sshd[4527]: Connection closed by 10.0.0.1 port 45300 Nov 24 00:30:15.982282 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Nov 24 00:30:15.986604 systemd[1]: sshd@26-10.0.0.139:22-10.0.0.1:45300.service: Deactivated successfully. Nov 24 00:30:15.988667 systemd[1]: session-27.scope: Deactivated successfully. Nov 24 00:30:15.989442 systemd-logind[1553]: Session 27 logged out. Waiting for processes to exit. Nov 24 00:30:15.990489 systemd-logind[1553]: Removed session 27.