Nov 4 23:50:57.463902 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:50:57.463950 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:50:57.463964 kernel: BIOS-provided physical RAM map:
Nov 4 23:50:57.463971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:50:57.463978 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:50:57.463990 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:50:57.464002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 4 23:50:57.464012 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 4 23:50:57.464025 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 23:50:57.464037 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 23:50:57.464047 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:50:57.464056 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:50:57.464065 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:50:57.464074 kernel: NX (Execute Disable) protection: active
Nov 4 23:50:57.464088 kernel: APIC: Static calls initialized
Nov 4 23:50:57.464099 kernel: SMBIOS 2.8 present.
Nov 4 23:50:57.464111 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 4 23:50:57.464121 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:50:57.464131 kernel: Hypervisor detected: KVM
Nov 4 23:50:57.464141 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 23:50:57.464151 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:50:57.464161 kernel: kvm-clock: using sched offset of 4364848321 cycles
Nov 4 23:50:57.464171 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:50:57.464181 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 23:50:57.464190 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:50:57.464198 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:50:57.464206 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 23:50:57.464215 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:50:57.464223 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:50:57.464231 kernel: Using GB pages for direct mapping
Nov 4 23:50:57.464239 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:50:57.464257 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 4 23:50:57.464266 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464274 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464282 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464291 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 4 23:50:57.464299 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464307 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464318 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464326 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:50:57.464338 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 4 23:50:57.464346 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 4 23:50:57.464354 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 4 23:50:57.464365 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 4 23:50:57.464373 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 4 23:50:57.464381 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 4 23:50:57.464389 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 4 23:50:57.464398 kernel: No NUMA configuration found
Nov 4 23:50:57.464406 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 4 23:50:57.464417 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 4 23:50:57.464426 kernel: Zone ranges:
Nov 4 23:50:57.464434 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:50:57.464442 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 4 23:50:57.464450 kernel: Normal empty
Nov 4 23:50:57.464458 kernel: Device empty
Nov 4 23:50:57.464466 kernel: Movable zone start for each node
Nov 4 23:50:57.464474 kernel: Early memory node ranges
Nov 4 23:50:57.464485 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:50:57.464493 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 4 23:50:57.464502 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 4 23:50:57.464510 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:50:57.464518 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:50:57.464526 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 23:50:57.464592 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:50:57.464618 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:50:57.464630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:50:57.464641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:50:57.464655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:50:57.464667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:50:57.464678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:50:57.464690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:50:57.464705 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:50:57.464716 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:50:57.464727 kernel: TSC deadline timer available
Nov 4 23:50:57.464738 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:50:57.464750 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:50:57.464761 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:50:57.464772 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:50:57.464783 kernel: CPU topo: Num. cores per package: 4
Nov 4 23:50:57.464796 kernel: CPU topo: Num. threads per package: 4
Nov 4 23:50:57.464808 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 23:50:57.464819 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:50:57.464830 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 23:50:57.464841 kernel: kvm-guest: setup PV sched yield
Nov 4 23:50:57.464853 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 23:50:57.464864 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:50:57.464878 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:50:57.464890 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 23:50:57.464901 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 23:50:57.464912 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 23:50:57.464923 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 23:50:57.464934 kernel: kvm-guest: PV spinlocks enabled
Nov 4 23:50:57.464946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 23:50:57.464959 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:50:57.464986 kernel: random: crng init done
Nov 4 23:50:57.464998 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 23:50:57.465026 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:50:57.465039 kernel: Fallback order for Node 0: 0
Nov 4 23:50:57.465051 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 4 23:50:57.465062 kernel: Policy zone: DMA32
Nov 4 23:50:57.465078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:50:57.465094 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 23:50:57.465106 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:50:57.465117 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:50:57.465128 kernel: Dynamic Preempt: voluntary
Nov 4 23:50:57.465140 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:50:57.465155 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:50:57.465171 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 23:50:57.465182 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:50:57.465196 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:50:57.465208 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:50:57.465218 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:50:57.465230 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 23:50:57.465241 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:50:57.465262 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:50:57.465277 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:50:57.465288 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 23:50:57.465300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:50:57.465321 kernel: Console: colour VGA+ 80x25
Nov 4 23:50:57.465335 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:50:57.465347 kernel: ACPI: Core revision 20240827
Nov 4 23:50:57.465359 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:50:57.465370 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:50:57.465382 kernel: x2apic enabled
Nov 4 23:50:57.465394 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:50:57.465411 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 23:50:57.465424 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 23:50:57.465436 kernel: kvm-guest: setup PV IPIs
Nov 4 23:50:57.465450 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:50:57.465462 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:50:57.465474 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 23:50:57.465486 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 23:50:57.465498 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 23:50:57.465510 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 23:50:57.465522 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:50:57.465553 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:50:57.465566 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:50:57.465578 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 23:50:57.465590 kernel: active return thunk: retbleed_return_thunk
Nov 4 23:50:57.465601 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 23:50:57.465613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:50:57.465624 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:50:57.465640 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 23:50:57.465653 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 23:50:57.465665 kernel: active return thunk: srso_return_thunk
Nov 4 23:50:57.465676 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 23:50:57.465688 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:50:57.465699 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:50:57.465710 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:50:57.465725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:50:57.465737 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:50:57.465748 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:50:57.465759 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:50:57.465770 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:50:57.465782 kernel: landlock: Up and running.
Nov 4 23:50:57.465793 kernel: SELinux: Initializing.
Nov 4 23:50:57.465810 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:50:57.465821 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:50:57.465833 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 23:50:57.465844 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 23:50:57.465856 kernel: ... version: 0
Nov 4 23:50:57.465867 kernel: ... bit width: 48
Nov 4 23:50:57.465879 kernel: ... generic registers: 6
Nov 4 23:50:57.465893 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:50:57.465904 kernel: ... max period: 00007fffffffffff
Nov 4 23:50:57.465915 kernel: ... fixed-purpose events: 0
Nov 4 23:50:57.465927 kernel: ... event mask: 000000000000003f
Nov 4 23:50:57.465939 kernel: signal: max sigframe size: 1776
Nov 4 23:50:57.465950 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:50:57.465963 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:50:57.465974 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:50:57.465990 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:50:57.466005 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:50:57.466040 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 23:50:57.466051 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 23:50:57.466063 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 23:50:57.466075 kernel: Memory: 2451436K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114376K reserved, 0K cma-reserved)
Nov 4 23:50:57.466087 kernel: devtmpfs: initialized
Nov 4 23:50:57.466102 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:50:57.466113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:50:57.466125 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 23:50:57.466136 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:50:57.466152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:50:57.466160 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:50:57.466169 kernel: audit: type=2000 audit(1762300253.703:1): state=initialized audit_enabled=0 res=1
Nov 4 23:50:57.466180 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:50:57.466188 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:50:57.466197 kernel: cpuidle: using governor menu
Nov 4 23:50:57.466205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:50:57.466214 kernel: dca service started, version 1.12.1
Nov 4 23:50:57.466222 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 23:50:57.466231 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 23:50:57.466242 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:50:57.466259 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:50:57.466267 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:50:57.466276 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:50:57.466284 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:50:57.466294 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:50:57.466302 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:50:57.466313 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:50:57.466322 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:50:57.466330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:50:57.466339 kernel: ACPI: Interpreter enabled
Nov 4 23:50:57.466347 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 23:50:57.466356 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:50:57.466365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:50:57.466375 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:50:57.466384 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 23:50:57.466392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:50:57.466761 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:50:57.467010 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 23:50:57.467241 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 23:50:57.467276 kernel: PCI host bridge to bus 0000:00
Nov 4 23:50:57.467518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:50:57.467744 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:50:57.467946 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:50:57.468159 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 4 23:50:57.468396 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 23:50:57.468646 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 23:50:57.468856 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:50:57.469102 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:50:57.469356 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:50:57.469628 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 23:50:57.469905 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 23:50:57.470129 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 23:50:57.470356 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:50:57.470616 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:50:57.470849 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 4 23:50:57.471074 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 23:50:57.471313 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 23:50:57.471564 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:50:57.471795 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 23:50:57.472024 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 23:50:57.472259 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 23:50:57.472497 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:50:57.472755 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 4 23:50:57.472979 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 4 23:50:57.473202 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 4 23:50:57.473437 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 23:50:57.473719 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:50:57.473954 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 23:50:57.474192 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 23:50:57.474441 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 4 23:50:57.474698 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 4 23:50:57.474939 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 23:50:57.475168 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 23:50:57.475192 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:50:57.475205 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:50:57.475217 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:50:57.475229 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:50:57.475242 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 23:50:57.475265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 23:50:57.475281 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 23:50:57.475294 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 23:50:57.475305 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 23:50:57.475317 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 23:50:57.475330 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 23:50:57.475342 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 23:50:57.475354 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 23:50:57.475367 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 23:50:57.475382 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 23:50:57.475394 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 23:50:57.475406 kernel: iommu: Default domain type: Translated
Nov 4 23:50:57.475418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:50:57.475431 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:50:57.475443 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:50:57.475456 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:50:57.475471 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 4 23:50:57.475741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 23:50:57.475976 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 23:50:57.476210 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:50:57.476227 kernel: vgaarb: loaded
Nov 4 23:50:57.476240 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:50:57.476267 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:50:57.476279 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:50:57.476291 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:50:57.476303 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:50:57.476315 kernel: pnp: PnP ACPI init
Nov 4 23:50:57.476586 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 23:50:57.476608 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 23:50:57.476625 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:50:57.476637 kernel: NET: Registered PF_INET protocol family
Nov 4 23:50:57.476651 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 23:50:57.476664 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 23:50:57.476676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:50:57.476689 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:50:57.476701 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 23:50:57.476717 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 23:50:57.476729 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:50:57.476741 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:50:57.476754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:50:57.476766 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:50:57.476967 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:50:57.477183 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:50:57.477417 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:50:57.477649 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 4 23:50:57.477859 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 23:50:57.478067 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 23:50:57.478085 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:50:57.478098 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:50:57.478111 kernel: Initialise system trusted keyrings
Nov 4 23:50:57.478129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 23:50:57.478141 kernel: Key type asymmetric registered
Nov 4 23:50:57.478153 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:50:57.478166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:50:57.478179 kernel: io scheduler mq-deadline registered
Nov 4 23:50:57.478191 kernel: io scheduler kyber registered
Nov 4 23:50:57.478203 kernel: io scheduler bfq registered
Nov 4 23:50:57.478219 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:50:57.478232 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 23:50:57.478245 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 23:50:57.478267 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 23:50:57.478280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:50:57.478292 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:50:57.478304 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:50:57.478320 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:50:57.478332 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:50:57.478611 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 23:50:57.478632 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:50:57.478857 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 23:50:57.479075 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T23:50:55 UTC (1762300255)
Nov 4 23:50:57.479310 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 23:50:57.479329 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 23:50:57.479341 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:50:57.479354 kernel: Segment Routing with IPv6
Nov 4 23:50:57.479366 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:50:57.479379 kernel: hpet: Lost 1 RTC interrupts
Nov 4 23:50:57.479391 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:50:57.479408 kernel: Key type dns_resolver registered
Nov 4 23:50:57.479420 kernel: IPI shorthand broadcast: enabled
Nov 4 23:50:57.479432 kernel: sched_clock: Marking stable (1255004448, 272755432)->(1750863098, -223103218)
Nov 4 23:50:57.479445 kernel: registered taskstats version 1
Nov 4 23:50:57.479457 kernel: Loading compiled-in X.509 certificates
Nov 4 23:50:57.479470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:50:57.479483 kernel: Demotion targets for Node 0: null
Nov 4 23:50:57.479498 kernel: Key type .fscrypt registered
Nov 4 23:50:57.479510 kernel: Key type fscrypt-provisioning registered
Nov 4 23:50:57.479529 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:50:57.479558 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:50:57.479571 kernel: ima: No architecture policies found
Nov 4 23:50:57.479583 kernel: clk: Disabling unused clocks
Nov 4 23:50:57.479596 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:50:57.479608 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:50:57.479625 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:50:57.479637 kernel: Run /init as init process
Nov 4 23:50:57.479649 kernel: with arguments:
Nov 4 23:50:57.479661 kernel: /init
Nov 4 23:50:57.479673 kernel: with environment:
Nov 4 23:50:57.479685 kernel: HOME=/
Nov 4 23:50:57.479697 kernel: TERM=linux
Nov 4 23:50:57.479712 kernel: SCSI subsystem initialized
Nov 4 23:50:57.479728 kernel: libata version 3.00 loaded.
Nov 4 23:50:57.479999 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 23:50:57.480022 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 23:50:57.480237 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 23:50:57.480463 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 23:50:57.480761 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 23:50:57.481005 kernel: scsi host0: ahci
Nov 4 23:50:57.481237 kernel: scsi host1: ahci
Nov 4 23:50:57.481478 kernel: scsi host2: ahci
Nov 4 23:50:57.481731 kernel: scsi host3: ahci
Nov 4 23:50:57.481966 kernel: scsi host4: ahci
Nov 4 23:50:57.482198 kernel: scsi host5: ahci
Nov 4 23:50:57.482216 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 4 23:50:57.482229 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 4 23:50:57.482243 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 4 23:50:57.482265 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 4 23:50:57.482283 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 4 23:50:57.482296 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 4 23:50:57.482309 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 23:50:57.482322 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 23:50:57.482335 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 23:50:57.482348 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 23:50:57.482366 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 23:50:57.482382 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 23:50:57.482395 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:50:57.482407 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 23:50:57.482420 kernel: ata3.00: applying bridge limits
Nov 4 23:50:57.482433 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:50:57.482445 kernel: ata3.00: configured for UDMA/100
Nov 4 23:50:57.482719 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 23:50:57.482969 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 23:50:57.483181 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 23:50:57.483197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:50:57.483210 kernel: GPT:16515071 != 27000831
Nov 4 23:50:57.483223 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:50:57.483235 kernel: GPT:16515071 != 27000831
Nov 4 23:50:57.483257 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:50:57.483274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 23:50:57.483509 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 23:50:57.483526 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 23:50:57.483776 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 23:50:57.483793 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:50:57.483806 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:50:57.483824 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:50:57.483837 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:50:57.483852 kernel: raid6: avx2x4 gen() 24993 MB/s
Nov 4 23:50:57.483865 kernel: raid6: avx2x2 gen() 22233 MB/s
Nov 4 23:50:57.483878 kernel: raid6: avx2x1 gen() 22116 MB/s
Nov 4 23:50:57.483893 kernel: raid6: using algorithm avx2x4 gen() 24993 MB/s
Nov 4 23:50:57.483905 kernel: raid6: .... xor() 7188 MB/s, rmw enabled
Nov 4 23:50:57.483918 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:50:57.483931 kernel: xor: automatically using best checksumming function avx
Nov 4 23:50:57.483946 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:50:57.483960 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 4 23:50:57.483973 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:50:57.483989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:50:57.484002 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:50:57.484015 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:50:57.484028 kernel: loop: module loaded
Nov 4 23:50:57.484041 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:50:57.484054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:50:57.484068 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:50:57.484087 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:50:57.484102 systemd[1]: Detected virtualization kvm.
Nov 4 23:50:57.484115 systemd[1]: Detected architecture x86-64.
Nov 4 23:50:57.484128 systemd[1]: Running in initrd.
Nov 4 23:50:57.484141 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:50:57.484158 systemd[1]: Hostname set to .
Nov 4 23:50:57.484171 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:50:57.484185 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:50:57.484198 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:50:57.484212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:50:57.484226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:50:57.484240 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:50:57.484267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:50:57.484281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:50:57.484295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:50:57.484309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:50:57.484323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:50:57.484339 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:50:57.484353 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:50:57.484367 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:50:57.484380 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:50:57.484393 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:50:57.484407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:50:57.484420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:50:57.484437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:50:57.484450 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:50:57.484464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:50:57.484478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:50:57.484492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:50:57.484506 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:50:57.484521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:50:57.484550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:50:57.484565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:50:57.484579 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:50:57.484593 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:50:57.484607 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:50:57.484620 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:50:57.484634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:50:57.484650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:50:57.484671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:50:57.484685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:50:57.484701 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:50:57.484715 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:50:57.484769 systemd-journald[314]: Collecting audit messages is disabled.
Nov 4 23:50:57.484804 systemd-journald[314]: Journal started
Nov 4 23:50:57.484830 systemd-journald[314]: Runtime Journal (/run/log/journal/205cde831c2f4dfdb22c99030d44d8a4) is 6M, max 48.3M, 42.2M free.
Nov 4 23:50:57.487567 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:50:57.488528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:50:57.497102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:50:57.503757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:50:57.622865 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:50:57.622917 kernel: Bridge firewalling registered
Nov 4 23:50:57.511741 systemd-modules-load[317]: Inserted module 'br_netfilter'
Nov 4 23:50:57.619986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:50:57.627647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:50:57.647867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:50:57.653039 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:50:57.658853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:50:57.661003 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:50:57.671089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:50:57.676454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:50:57.678892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:50:57.691744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:50:57.697412 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:50:57.746855 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:50:57.747276 systemd-resolved[350]: Positive Trust Anchors:
Nov 4 23:50:57.747285 systemd-resolved[350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:50:57.747290 systemd-resolved[350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:50:57.747321 systemd-resolved[350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:50:57.778131 systemd-resolved[350]: Defaulting to hostname 'linux'.
Nov 4 23:50:57.779833 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:50:57.785028 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:50:57.901596 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:50:57.917593 kernel: iscsi: registered transport (tcp)
Nov 4 23:50:57.948152 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:50:57.948298 kernel: QLogic iSCSI HBA Driver
Nov 4 23:50:57.978660 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:50:58.006177 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:50:58.012385 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:50:58.085697 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:50:58.091340 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:50:58.095764 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:50:58.139028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:50:58.144967 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:50:58.187371 systemd-udevd[600]: Using default interface naming scheme 'v257'.
Nov 4 23:50:58.206388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:50:58.210790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:50:58.244886 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation
Nov 4 23:50:58.265860 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:50:58.269981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:50:58.305719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:50:58.311775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:50:58.350830 systemd-networkd[725]: lo: Link UP
Nov 4 23:50:58.350841 systemd-networkd[725]: lo: Gained carrier
Nov 4 23:50:58.351947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:50:58.355744 systemd[1]: Reached target network.target - Network.
Nov 4 23:50:58.428238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:50:58.437783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:50:58.617398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:50:58.622977 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:50:58.654255 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:50:58.682587 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:50:58.693320 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:50:58.698855 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:50:58.696523 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:50:58.696528 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:50:58.706630 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:50:58.706161 systemd-networkd[725]: eth0: Link UP
Nov 4 23:50:58.707936 systemd-networkd[725]: eth0: Gained carrier
Nov 4 23:50:58.707947 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:50:58.724479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:50:58.728466 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 23:50:58.731194 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:50:58.733456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:50:58.734347 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:50:58.741259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:50:58.760971 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:50:58.766468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:50:58.767454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:50:58.769123 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:50:58.775819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:50:58.879479 disk-uuid[844]: Primary Header is updated.
Nov 4 23:50:58.879479 disk-uuid[844]: Secondary Entries is updated.
Nov 4 23:50:58.879479 disk-uuid[844]: Secondary Header is updated.
Nov 4 23:50:58.914308 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:50:59.008152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:50:59.947878 disk-uuid[846]: Warning: The kernel is still using the old partition table.
Nov 4 23:50:59.947878 disk-uuid[846]: The new table will be used at the next reboot or after you
Nov 4 23:50:59.947878 disk-uuid[846]: run partprobe(8) or kpartx(8)
Nov 4 23:50:59.947878 disk-uuid[846]: The operation has completed successfully.
Nov 4 23:50:59.968146 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:50:59.968332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:50:59.973494 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:51:00.018567 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868)
Nov 4 23:51:00.018653 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:51:00.022087 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:51:00.027086 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:51:00.027130 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:51:00.037578 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:51:00.040306 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:51:00.044128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:51:00.491461 systemd-networkd[725]: eth0: Gained IPv6LL
Nov 4 23:51:00.659341 ignition[887]: Ignition 2.22.0
Nov 4 23:51:00.659361 ignition[887]: Stage: fetch-offline
Nov 4 23:51:00.659408 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:00.659420 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:00.659566 ignition[887]: parsed url from cmdline: ""
Nov 4 23:51:00.659575 ignition[887]: no config URL provided
Nov 4 23:51:00.659583 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:51:00.659601 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:51:00.659659 ignition[887]: op(1): [started] loading QEMU firmware config module
Nov 4 23:51:00.659664 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 23:51:00.686876 ignition[887]: op(1): [finished] loading QEMU firmware config module
Nov 4 23:51:00.770409 ignition[887]: parsing config with SHA512: e7143625f915044221116e9923672bab7e16c6d194b2af27c615e4543a3318631ee3a580e7284e54678520454c585ac17ad2fefbd7327164e92cfc1947916a03
Nov 4 23:51:00.778723 unknown[887]: fetched base config from "system"
Nov 4 23:51:00.779265 ignition[887]: fetch-offline: fetch-offline passed
Nov 4 23:51:00.778744 unknown[887]: fetched user config from "qemu"
Nov 4 23:51:00.779362 ignition[887]: Ignition finished successfully
Nov 4 23:51:00.785289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:51:00.797159 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 23:51:00.798397 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:51:00.924529 ignition[897]: Ignition 2.22.0
Nov 4 23:51:00.924563 ignition[897]: Stage: kargs
Nov 4 23:51:00.924808 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:00.924824 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:00.927243 ignition[897]: kargs: kargs passed
Nov 4 23:51:00.927320 ignition[897]: Ignition finished successfully
Nov 4 23:51:00.932354 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:51:00.937931 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:51:00.997069 ignition[905]: Ignition 2.22.0
Nov 4 23:51:00.997084 ignition[905]: Stage: disks
Nov 4 23:51:00.997399 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:00.997417 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:00.998711 ignition[905]: disks: disks passed
Nov 4 23:51:00.998762 ignition[905]: Ignition finished successfully
Nov 4 23:51:01.008477 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:51:01.009643 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:51:01.010324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:51:01.046896 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:51:01.048028 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:51:01.048718 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:51:01.057307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:51:01.158171 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 23:51:01.213828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:51:01.218739 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:51:01.447663 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:51:01.448683 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:51:01.450736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:51:01.454646 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:51:01.458881 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:51:01.460455 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 23:51:01.460558 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:51:01.460612 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:51:01.481591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924)
Nov 4 23:51:01.482525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:51:01.490746 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:51:01.490776 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:51:01.486712 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:51:01.502424 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:51:01.502512 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:51:01.504452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:51:01.578041 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:51:01.582856 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:51:01.589597 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:51:01.632164 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:51:01.807135 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:51:01.811189 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:51:01.814267 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:51:01.837467 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:51:01.840060 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:51:01.856742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:51:01.897306 ignition[1038]: INFO : Ignition 2.22.0
Nov 4 23:51:01.897306 ignition[1038]: INFO : Stage: mount
Nov 4 23:51:01.901340 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:01.901340 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:01.901340 ignition[1038]: INFO : mount: mount passed
Nov 4 23:51:01.901340 ignition[1038]: INFO : Ignition finished successfully
Nov 4 23:51:01.904217 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:51:01.909608 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:51:02.451170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:51:02.489585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050)
Nov 4 23:51:02.493276 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:51:02.493302 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:51:02.496570 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:51:02.496598 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:51:02.499497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:51:02.560003 ignition[1067]: INFO : Ignition 2.22.0
Nov 4 23:51:02.560003 ignition[1067]: INFO : Stage: files
Nov 4 23:51:02.563342 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:02.563342 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:02.563342 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:51:02.563342 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:51:02.563342 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:51:02.575168 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:51:02.575168 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:51:02.575168 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:51:02.575168 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:51:02.575168 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:51:02.566885 unknown[1067]: wrote ssh authorized keys file for user: core
Nov 4 23:51:02.621385 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:51:02.718760 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:51:02.718760 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 4 23:51:02.726226 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 4 23:51:02.962503 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 4 23:51:03.094161 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 4 23:51:03.094161 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:51:03.100672 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:51:03.165687 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:51:03.169236 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:51:03.169236 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:51:03.244369 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:51:03.257454 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:51:03.257454 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:51:03.650429 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 4 23:51:04.356724 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:51:04.356724 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 4 23:51:04.364064 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:51:04.369371 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:51:04.369371 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 4 23:51:04.369371 ignition[1067]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 4 23:51:04.369371 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:51:04.382461 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:51:04.382461 ignition[1067]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 4 23:51:04.382461 ignition[1067]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:51:04.400707 ignition[1067]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:51:04.406411 ignition[1067]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:51:04.410009 ignition[1067]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:51:04.410009 ignition[1067]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:51:04.414894 ignition[1067]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:51:04.414894 ignition[1067]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:51:04.414894 ignition[1067]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:51:04.414894 ignition[1067]: INFO : files: files passed
Nov 4 23:51:04.414894 ignition[1067]: INFO : Ignition finished successfully
Nov 4 23:51:04.424023 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:51:04.427889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:51:04.430281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:51:04.452723 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:51:04.452904 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:51:04.458612 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 4 23:51:04.463814 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:51:04.463814 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:51:04.469295 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:51:04.473725 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:51:04.478303 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:51:04.479985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:51:04.549064 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:51:04.549242 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:51:04.553230 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:51:04.556888 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:51:04.561442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:51:04.565872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:51:04.609714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:51:04.615406 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:51:04.648201 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:51:04.648401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:51:04.652454 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:51:04.653367 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:51:04.658633 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:51:04.658826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:51:04.664559 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:51:04.665426 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:51:04.672954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:51:04.676751 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:51:04.677653 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:51:04.678564 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:51:04.680500 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:51:04.681045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:51:04.699255 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:51:04.700180 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:51:04.704640 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:51:04.705636 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:51:04.705898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:51:04.712227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:51:04.715758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:51:04.717013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:51:04.717298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:51:04.722014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:51:04.722154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:51:04.729060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:51:04.729192 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:51:04.732493 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:51:04.733281 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:51:04.738965 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:51:04.742615 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:51:04.743496 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:51:04.750168 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:51:04.750316 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:51:04.753160 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:51:04.753258 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:51:04.757398 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:51:04.757706 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:51:04.758508 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:51:04.758688 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:51:04.766032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:51:04.768040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:51:04.771230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:51:04.771455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:51:04.778508 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:51:04.778696 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:51:04.779311 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:51:04.779420 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:51:04.794951 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:51:04.864596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:51:04.892977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:51:04.905864 ignition[1125]: INFO : Ignition 2.22.0
Nov 4 23:51:04.905864 ignition[1125]: INFO : Stage: umount
Nov 4 23:51:04.908961 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:51:04.908961 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:51:04.908961 ignition[1125]: INFO : umount: umount passed
Nov 4 23:51:04.908961 ignition[1125]: INFO : Ignition finished successfully
Nov 4 23:51:04.909978 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:51:04.910155 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:51:04.912912 systemd[1]: Stopped target network.target - Network.
Nov 4 23:51:04.916432 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:51:04.916554 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:51:04.920669 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:51:04.920743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:51:04.922172 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:51:04.922233 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:51:04.926921 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:51:04.926999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:51:04.929873 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:51:04.936240 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:51:04.953293 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:51:04.953481 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:51:04.963929 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:51:04.964137 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:51:04.972265 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:51:04.975882 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:51:04.975949 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:51:04.978301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:51:04.984120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:51:04.984224 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:51:04.988055 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:51:04.988130 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:51:04.989259 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:51:04.989341 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:51:04.990232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:51:05.023886 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:51:05.024173 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:51:05.025963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:51:05.026037 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:51:05.030485 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:51:05.030602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:51:05.051139 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:51:05.051224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:51:05.059085 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:51:05.059169 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:51:05.067205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:51:05.067285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:51:05.076639 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:51:05.077338 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:51:05.077415 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:51:05.080990 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:51:05.081061 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:51:05.081524 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 23:51:05.081598 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:51:05.117620 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:51:05.117753 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:51:05.118387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:51:05.118448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:05.120408 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:51:05.161868 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:51:05.166215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:51:05.166396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:51:05.176319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:51:05.180710 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:51:05.188056 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:51:05.188222 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:51:05.214904 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:51:05.219577 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:51:05.259030 systemd[1]: Switching root.
Nov 4 23:51:05.308963 systemd-journald[314]: Journal stopped
Nov 4 23:51:07.836400 systemd-journald[314]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:51:07.836480 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:51:07.836495 kernel: SELinux: policy capability open_perms=1
Nov 4 23:51:07.836508 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:51:07.836520 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:51:07.836545 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:51:07.836562 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:51:07.836574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:51:07.836587 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:51:07.836613 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:51:07.836625 kernel: audit: type=1403 audit(1762300266.646:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:51:07.836640 systemd[1]: Successfully loaded SELinux policy in 81.952ms.
Nov 4 23:51:07.836661 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.471ms.
Nov 4 23:51:07.836682 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:51:07.836696 systemd[1]: Detected virtualization kvm.
Nov 4 23:51:07.836708 systemd[1]: Detected architecture x86-64.
Nov 4 23:51:07.836721 systemd[1]: Detected first boot.
Nov 4 23:51:07.836734 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:51:07.836747 kernel: Guest personality initialized and is inactive
Nov 4 23:51:07.836759 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:51:07.836777 kernel: Initialized host personality
Nov 4 23:51:07.836790 zram_generator::config[1171]: No configuration found.
Nov 4 23:51:07.836806 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:51:07.836818 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:51:07.836831 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:51:07.836844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:51:07.836857 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:51:07.836887 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:51:07.836901 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:51:07.836914 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:51:07.836927 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:51:07.836940 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:51:07.836953 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:51:07.836971 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:51:07.837005 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:51:07.837019 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:51:07.837032 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:51:07.837045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:51:07.837058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:51:07.837071 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:51:07.837094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:51:07.837110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:51:07.837124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:51:07.837138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:51:07.837151 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:51:07.837164 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:51:07.837177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:51:07.837197 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:51:07.837210 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:51:07.837226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:51:07.837251 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:51:07.837269 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:51:07.837284 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:51:07.837297 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:51:07.837328 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:51:07.837343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:51:07.837356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:51:07.837369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:51:07.837382 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:51:07.837398 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 23:51:07.837411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 23:51:07.837435 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 23:51:07.837452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:07.837466 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 23:51:07.837479 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 23:51:07.837493 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 23:51:07.837512 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 23:51:07.837528 systemd[1]: Reached target machines.target - Containers.
Nov 4 23:51:07.837687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 23:51:07.837703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:51:07.837716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:51:07.837728 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 23:51:07.837742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:51:07.837754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:51:07.837768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:51:07.837790 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 23:51:07.837804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:51:07.837821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 23:51:07.837834 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 23:51:07.837849 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 23:51:07.837862 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 23:51:07.837892 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 23:51:07.837906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:51:07.837919 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:51:07.837932 kernel: ACPI: bus type drm_connector registered
Nov 4 23:51:07.837948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:51:07.837964 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:51:07.837981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 23:51:07.838007 kernel: fuse: init (API version 7.41)
Nov 4 23:51:07.838049 systemd-journald[1235]: Collecting audit messages is disabled.
Nov 4 23:51:07.838074 systemd-journald[1235]: Journal started
Nov 4 23:51:07.838096 systemd-journald[1235]: Runtime Journal (/run/log/journal/205cde831c2f4dfdb22c99030d44d8a4) is 6M, max 48.3M, 42.2M free.
Nov 4 23:51:07.447751 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 23:51:07.467899 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 23:51:07.468521 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 23:51:07.843873 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 23:51:07.848580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:51:07.865574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:07.885338 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:51:07.874035 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 23:51:07.876234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 23:51:07.878418 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 23:51:07.880366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 23:51:07.882566 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 23:51:07.893564 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 23:51:07.895928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:51:07.898743 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 23:51:07.899016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 23:51:07.901601 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:51:07.901832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:51:07.904272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:51:07.904525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:51:07.906974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:51:07.907197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:51:07.909845 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 23:51:07.910084 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 23:51:07.942639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:51:07.942927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:51:07.945436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:51:07.948570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:51:07.954718 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 23:51:07.957904 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 23:51:07.975375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:51:07.999067 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:51:08.001457 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 23:51:08.004966 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 23:51:08.008072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 23:51:08.010123 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 23:51:08.010159 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:51:08.020305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 23:51:08.022829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:51:08.029382 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 23:51:08.033624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 23:51:08.035937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:51:08.039794 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 23:51:08.045084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:51:08.047828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:51:08.051753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 23:51:08.102858 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:51:08.107264 systemd-journald[1235]: Time spent on flushing to /var/log/journal/205cde831c2f4dfdb22c99030d44d8a4 is 18.914ms for 971 entries.
Nov 4 23:51:08.107264 systemd-journald[1235]: System Journal (/var/log/journal/205cde831c2f4dfdb22c99030d44d8a4) is 8M, max 163.5M, 155.5M free.
Nov 4 23:51:08.534977 systemd-journald[1235]: Received client request to flush runtime journal.
Nov 4 23:51:08.535068 kernel: loop1: detected capacity change from 0 to 110984
Nov 4 23:51:08.535101 kernel: loop2: detected capacity change from 0 to 229808
Nov 4 23:51:08.535126 kernel: loop3: detected capacity change from 0 to 128048
Nov 4 23:51:08.535148 kernel: loop4: detected capacity change from 0 to 110984
Nov 4 23:51:08.535169 kernel: loop5: detected capacity change from 0 to 229808
Nov 4 23:51:08.118002 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 23:51:08.121143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:51:08.123745 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:51:08.172708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:51:08.176058 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 23:51:08.178978 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 23:51:08.182855 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 23:51:08.220200 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Nov 4 23:51:08.220219 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Nov 4 23:51:08.227194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:51:08.231442 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 23:51:08.536839 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 23:51:08.556927 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 23:51:08.577488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:51:08.580580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:51:08.613090 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Nov 4 23:51:08.613117 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Nov 4 23:51:08.619760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:51:08.666324 kernel: loop6: detected capacity change from 0 to 128048
Nov 4 23:51:08.671747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 23:51:08.679379 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 4 23:51:08.689659 (sd-merge)[1306]: Merged extensions into '/usr'.
Nov 4 23:51:08.696087 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 23:51:08.696107 systemd[1]: Reloading...
Nov 4 23:51:08.807582 zram_generator::config[1359]: No configuration found.
Nov 4 23:51:08.912017 systemd-resolved[1311]: Positive Trust Anchors:
Nov 4 23:51:08.912037 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:51:08.912042 systemd-resolved[1311]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:51:08.912081 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:51:08.917382 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Nov 4 23:51:09.013473 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 23:51:09.014072 systemd[1]: Reloading finished in 317 ms.
Nov 4 23:51:09.046184 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 23:51:09.048396 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:51:09.050683 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 23:51:09.053238 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 23:51:09.059907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:51:09.085800 systemd[1]: Starting ensure-sysext.service...
Nov 4 23:51:09.088695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:51:09.115971 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:51:09.116014 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:51:09.116364 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:51:09.116672 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:51:09.117869 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:51:09.118164 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Nov 4 23:51:09.118244 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Nov 4 23:51:09.172178 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:51:09.172204 systemd-tmpfiles[1389]: Skipping /boot
Nov 4 23:51:09.176816 systemd[1]: Reload requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:51:09.176852 systemd[1]: Reloading...
Nov 4 23:51:09.188321 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:51:09.188336 systemd-tmpfiles[1389]: Skipping /boot
Nov 4 23:51:09.249574 zram_generator::config[1419]: No configuration found.
Nov 4 23:51:09.435908 systemd[1]: Reloading finished in 258 ms.
Nov 4 23:51:09.457587 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 23:51:09.480074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:51:09.491187 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:51:09.494089 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 23:51:09.514220 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 23:51:09.518009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 23:51:09.523111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:51:09.527241 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 23:51:09.533334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.533593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:51:09.545911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:51:09.568768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:51:09.586677 systemd-udevd[1468]: Using default interface naming scheme 'v257'.
Nov 4 23:51:09.597798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:51:09.601571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:51:09.601705 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:51:09.601804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.606806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:51:09.613694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:51:09.616296 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:51:09.616562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:51:09.619157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:51:09.619391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:51:09.628012 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 23:51:09.634960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.635331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:51:09.639795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:51:09.645822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:51:09.659627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:51:09.665322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:51:09.665643 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:51:09.665792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.668351 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 23:51:09.677533 augenrules[1493]: No rules
Nov 4 23:51:09.674359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:51:09.675310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:51:09.678803 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:51:09.688966 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:51:09.691616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:51:09.699079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:51:09.699406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:51:09.702592 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:51:09.702894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:51:09.706271 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 23:51:09.721027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.723388 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:51:09.725727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:51:09.727911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:51:09.733070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:51:09.740031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:51:09.744683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:51:09.746524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:51:09.746676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:51:09.749967 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:51:09.753665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:51:09.753816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:51:09.755522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:51:09.758750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:51:09.761388 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:51:09.761622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:51:09.764000 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:51:09.764252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:51:09.768676 augenrules[1522]: /sbin/augenrules: No change
Nov 4 23:51:09.778168 systemd[1]: Finished ensure-sysext.service.
Nov 4 23:51:09.785013 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 23:51:09.791786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:51:09.799874 augenrules[1552]: No rules
Nov 4 23:51:09.793948 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 23:51:09.833322 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:51:09.833662 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:51:09.836738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:51:09.836983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:51:09.846464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:51:09.930967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:51:09.943395 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 23:51:10.019695 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 23:51:10.022571 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 23:51:10.030573 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 23:51:10.035425 systemd-networkd[1532]: lo: Link UP
Nov 4 23:51:10.035437 systemd-networkd[1532]: lo: Gained carrier
Nov 4 23:51:10.037438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:51:10.037553 kernel: ACPI: button: Power Button [PWRF]
Nov 4 23:51:10.038630 systemd-networkd[1532]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:51:10.038647 systemd-networkd[1532]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:51:10.039831 systemd-networkd[1532]: eth0: Link UP
Nov 4 23:51:10.040021 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 23:51:10.040227 systemd-networkd[1532]: eth0: Gained carrier
Nov 4 23:51:10.040254 systemd-networkd[1532]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:51:10.042274 systemd[1]: Reached target network.target - Network.
Nov 4 23:51:10.044313 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 23:51:10.048166 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 23:51:10.053645 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 23:51:10.057626 systemd-networkd[1532]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 23:51:10.062687 systemd-timesyncd[1557]: Network configuration changed, trying to establish connection.
Nov 4 23:51:10.064718 systemd-timesyncd[1557]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 4 23:51:10.064865 systemd-timesyncd[1557]: Initial clock synchronization to Tue 2025-11-04 23:51:09.819315 UTC.
Nov 4 23:51:10.073890 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 4 23:51:10.076604 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 23:51:10.131080 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 23:51:10.236934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:51:10.367420 kernel: kvm_amd: TSC scaling supported
Nov 4 23:51:10.367561 kernel: kvm_amd: Nested Virtualization enabled
Nov 4 23:51:10.367586 kernel: kvm_amd: Nested Paging enabled
Nov 4 23:51:10.369406 kernel: kvm_amd: LBR virtualization supported
Nov 4 23:51:10.369453 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 4 23:51:10.370639 kernel: kvm_amd: Virtual GIF supported
Nov 4 23:51:10.585668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:10.601604 kernel: EDAC MC: Ver: 3.0.0
Nov 4 23:51:10.623009 ldconfig[1460]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 23:51:10.941609 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 23:51:10.947077 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 23:51:10.974154 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 23:51:10.987918 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:51:10.989905 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 23:51:10.992038 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 23:51:10.994129 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 23:51:10.996464 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 23:51:10.998436 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 23:51:11.000524 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 23:51:11.002608 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 23:51:11.002661 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:51:11.021128 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:51:11.024140 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 23:51:11.027780 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 23:51:11.032232 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 23:51:11.034544 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 23:51:11.036745 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 23:51:11.048289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 23:51:11.050762 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 23:51:11.053358 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 23:51:11.055790 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:51:11.057306 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:51:11.058800 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:51:11.058832 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:51:11.060065 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 23:51:11.062768 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 23:51:11.065264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 23:51:11.093868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 23:51:11.096884 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 23:51:11.098526 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 23:51:11.102577 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 23:51:11.104008 jq[1608]: false
Nov 4 23:51:11.105725 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 23:51:11.108614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 23:51:11.111891 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing passwd entry cache
Nov 4 23:51:11.111899 oslogin_cache_refresh[1610]: Refreshing passwd entry cache
Nov 4 23:51:11.112612 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 23:51:11.115811 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 23:51:11.123230 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting users, quitting
Nov 4 23:51:11.123230 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:51:11.123212 oslogin_cache_refresh[1610]: Failure getting users, quitting
Nov 4 23:51:11.123361 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing group entry cache
Nov 4 23:51:11.123236 oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:51:11.123302 oslogin_cache_refresh[1610]: Refreshing group entry cache
Nov 4 23:51:11.124185 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 23:51:11.125788 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 23:51:11.126296 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 23:51:11.127273 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 23:51:11.131114 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 23:51:11.136052 extend-filesystems[1609]: Found /dev/vda6
Nov 4 23:51:11.139048 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 23:51:11.141764 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 23:51:11.142095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 23:51:11.143136 extend-filesystems[1609]: Found /dev/vda9
Nov 4 23:51:11.144956 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting groups, quitting
Nov 4 23:51:11.144956 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:51:11.144947 oslogin_cache_refresh[1610]: Failure getting groups, quitting
Nov 4 23:51:11.144963 oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:51:11.145316 extend-filesystems[1609]: Checking size of /dev/vda9
Nov 4 23:51:11.146899 update_engine[1625]: I20251104 23:51:11.146823 1625 main.cc:92] Flatcar Update Engine starting
Nov 4 23:51:11.148056 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 23:51:11.148343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 23:51:11.150185 jq[1627]: true
Nov 4 23:51:11.150774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 23:51:11.151029 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 23:51:11.154104 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 23:51:11.154360 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 23:51:11.171573 tar[1631]: linux-amd64/LICENSE
Nov 4 23:51:11.171945 tar[1631]: linux-amd64/helm
Nov 4 23:51:11.172221 (ntainerd)[1638]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 4 23:51:11.183603 jq[1636]: true
Nov 4 23:51:11.200019 dbus-daemon[1606]: [system] SELinux support is enabled
Nov 4 23:51:11.200263 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 23:51:11.204482 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 23:51:11.204527 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 23:51:11.210475 update_engine[1625]: I20251104 23:51:11.208218 1625 update_check_scheduler.cc:74] Next update check in 2m22s
Nov 4 23:51:11.208441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 23:51:11.208467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 23:51:11.212438 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:51:11.216099 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:51:11.219044 extend-filesystems[1609]: Resized partition /dev/vda9
Nov 4 23:51:11.292583 extend-filesystems[1674]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 23:51:11.481327 systemd-logind[1623]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 4 23:51:11.481371 systemd-logind[1623]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 23:51:11.481820 systemd-logind[1623]: New seat seat0.
Nov 4 23:51:11.483980 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 23:51:11.499568 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 4 23:51:11.580573 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 4 23:51:11.617599 extend-filesystems[1674]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 23:51:11.617599 extend-filesystems[1674]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 4 23:51:11.617599 extend-filesystems[1674]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 4 23:51:11.737010 bash[1673]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:51:11.718837 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:51:11.737220 extend-filesystems[1609]: Resized filesystem in /dev/vda9
Nov 4 23:51:11.719222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:51:11.724593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:51:11.736492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 4 23:51:11.870848 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 23:51:11.916810 systemd-networkd[1532]: eth0: Gained IPv6LL
Nov 4 23:51:11.925267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:51:11.941378 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:51:11.946637 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 4 23:51:11.957278 tar[1631]: linux-amd64/README.md
Nov 4 23:51:11.955271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:11.969442 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:51:12.012797 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 23:51:12.028189 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 4 23:51:12.028675 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 4 23:51:12.032242 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:51:12.040416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 23:51:12.086895 containerd[1638]: time="2025-11-04T23:51:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:51:12.087818 containerd[1638]: time="2025-11-04T23:51:12.087744019Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:51:12.102638 containerd[1638]: time="2025-11-04T23:51:12.102551683Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.76µs"
Nov 4 23:51:12.102638 containerd[1638]: time="2025-11-04T23:51:12.102597099Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:51:12.102638 containerd[1638]: time="2025-11-04T23:51:12.102621736Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:51:12.102913 containerd[1638]: time="2025-11-04T23:51:12.102876123Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 23:51:12.102913 containerd[1638]: time="2025-11-04T23:51:12.102906378Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 23:51:12.102974 containerd[1638]: time="2025-11-04T23:51:12.102949248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103080 containerd[1638]: time="2025-11-04T23:51:12.103045461Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103080 containerd[1638]: time="2025-11-04T23:51:12.103068998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103452 containerd[1638]: time="2025-11-04T23:51:12.103410617Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103452 containerd[1638]: time="2025-11-04T23:51:12.103438434Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103508 containerd[1638]: time="2025-11-04T23:51:12.103453741Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103508 containerd[1638]: time="2025-11-04T23:51:12.103467411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 23:51:12.103663 containerd[1638]: time="2025-11-04T23:51:12.103630470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 23:51:12.104004 containerd[1638]: time="2025-11-04T23:51:12.103961325Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:51:12.104047 containerd[1638]: time="2025-11-04T23:51:12.104012581Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:51:12.104047 containerd[1638]: time="2025-11-04T23:51:12.104028385Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 23:51:12.104116 containerd[1638]: time="2025-11-04T23:51:12.104085150Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 23:51:12.104562 containerd[1638]: time="2025-11-04T23:51:12.104474389Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 23:51:12.106764 containerd[1638]: time="2025-11-04T23:51:12.106713218Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 23:51:12.108702 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 23:51:12.197486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 23:51:12.202696 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 23:51:12.230001 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 23:51:12.230288 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 23:51:12.234438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 23:51:12.305163 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 23:51:12.309647 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 23:51:12.312836 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 23:51:12.315336 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 23:51:12.543064 containerd[1638]: time="2025-11-04T23:51:12.542925356Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543099296Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543134328Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543149694Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543176253Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543193920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:51:12.543225 containerd[1638]: time="2025-11-04T23:51:12.543219094Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:51:12.543393 containerd[1638]: time="2025-11-04T23:51:12.543243333Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:51:12.543393 containerd[1638]: time="2025-11-04T23:51:12.543261185Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:51:12.543393 containerd[1638]: time="2025-11-04T23:51:12.543275780Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:51:12.543393 containerd[1638]: time="2025-11-04T23:51:12.543288485Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:51:12.543393 containerd[1638]: time="2025-11-04T23:51:12.543308862Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:51:12.543654 containerd[1638]: time="2025-11-04T23:51:12.543618668Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:51:12.543654 containerd[1638]: time="2025-11-04T23:51:12.543652851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:51:12.543707 containerd[1638]: time="2025-11-04T23:51:12.543667125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:51:12.543707 containerd[1638]: time="2025-11-04T23:51:12.543680024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:51:12.543707 containerd[1638]: time="2025-11-04T23:51:12.543691315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:51:12.543707 containerd[1638]: time="2025-11-04T23:51:12.543704351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:51:12.543789 containerd[1638]: time="2025-11-04T23:51:12.543740104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:51:12.543789 containerd[1638]: time="2025-11-04T23:51:12.543753647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:51:12.543789 containerd[1638]: time="2025-11-04T23:51:12.543779874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:51:12.543848 containerd[1638]: time="2025-11-04T23:51:12.543795718Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:51:12.543848 containerd[1638]: time="2025-11-04T23:51:12.543811883Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:51:12.543972 containerd[1638]: time="2025-11-04T23:51:12.543944015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:51:12.543972 containerd[1638]: time="2025-11-04T23:51:12.543962033Z" level=info msg="Start snapshots syncer"
Nov 4 23:51:12.544044 containerd[1638]: time="2025-11-04T23:51:12.544026461Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:51:12.544514 containerd[1638]: time="2025-11-04T23:51:12.544401679Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:51:12.544653 containerd[1638]: time="2025-11-04T23:51:12.544551225Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:51:12.544653 containerd[1638]: time="2025-11-04T23:51:12.544630560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:51:12.544767 containerd[1638]: time="2025-11-04T23:51:12.544744771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:51:12.544791 containerd[1638]: time="2025-11-04T23:51:12.544767810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:51:12.544846 containerd[1638]: time="2025-11-04T23:51:12.544805562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:51:12.544846 containerd[1638]: time="2025-11-04T23:51:12.544817955Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:51:12.544897 containerd[1638]: time="2025-11-04T23:51:12.544846336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:51:12.544897 containerd[1638]: time="2025-11-04T23:51:12.544857423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:51:12.544897 containerd[1638]: time="2025-11-04T23:51:12.544868820Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Nov 4 23:51:12.544897 containerd[1638]: time="2025-11-04T23:51:12.544891781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:51:12.544963 containerd[1638]: time="2025-11-04T23:51:12.544906075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:51:12.544963 containerd[1638]: time="2025-11-04T23:51:12.544938016Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:51:12.545004 containerd[1638]: time="2025-11-04T23:51:12.544984250Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:51:12.545022 containerd[1638]: time="2025-11-04T23:51:12.545005194Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:51:12.545022 containerd[1638]: time="2025-11-04T23:51:12.545013919Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:51:12.545068 containerd[1638]: time="2025-11-04T23:51:12.545022880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:51:12.545068 containerd[1638]: time="2025-11-04T23:51:12.545034297Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:51:12.545118 containerd[1638]: time="2025-11-04T23:51:12.545068967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:51:12.545118 containerd[1638]: time="2025-11-04T23:51:12.545093431Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:51:12.545165 containerd[1638]: 
time="2025-11-04T23:51:12.545121530Z" level=info msg="runtime interface created" Nov 4 23:51:12.545165 containerd[1638]: time="2025-11-04T23:51:12.545128375Z" level=info msg="created NRI interface" Nov 4 23:51:12.545165 containerd[1638]: time="2025-11-04T23:51:12.545137657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:51:12.545165 containerd[1638]: time="2025-11-04T23:51:12.545149610Z" level=info msg="Connect containerd service" Nov 4 23:51:12.545252 containerd[1638]: time="2025-11-04T23:51:12.545174560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:51:12.546629 containerd[1638]: time="2025-11-04T23:51:12.546596751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:51:12.756290 containerd[1638]: time="2025-11-04T23:51:12.756143473Z" level=info msg="Start subscribing containerd event" Nov 4 23:51:12.756290 containerd[1638]: time="2025-11-04T23:51:12.756199165Z" level=info msg="Start recovering state" Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756310335Z" level=info msg="Start event monitor" Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756323400Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756330020Z" level=info msg="Start streaming server" Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756344138Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756354277Z" level=info msg="runtime interface starting up..." Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756360752Z" level=info msg="starting plugins..." 
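The level=error entry above reports that the CRI plugin found no network config in /etc/cni/net.d at startup (expected on first boot, before any CNI provider has installed one). A minimal conflist that would satisfy this loader might look like the sketch below; the network name, bridge device, and subnet are illustrative assumptions, not values taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "loopback" }
  ]
}
```

Per the cri plugin config logged above, containerd would pick such a file up from confDir=/etc/cni/net.d, with plugin binaries under binDir=/opt/cni/bin.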
Nov 4 23:51:12.756558 containerd[1638]: time="2025-11-04T23:51:12.756377648Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 23:51:12.775570 containerd[1638]: time="2025-11-04T23:51:12.775422538Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 23:51:12.775570 containerd[1638]: time="2025-11-04T23:51:12.775547387Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 23:51:12.775804 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 23:51:12.776312 containerd[1638]: time="2025-11-04T23:51:12.776276548Z" level=info msg="containerd successfully booted in 0.690108s"
Nov 4 23:51:13.542022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:13.544615 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 23:51:13.546633 systemd[1]: Startup finished in 2.689s (kernel) + 9.586s (initrd) + 6.979s (userspace) = 19.255s.
Nov 4 23:51:13.593020 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:51:14.365698 kubelet[1746]: E1104 23:51:14.365594 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:51:14.369793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:51:14.369990 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:51:14.370415 systemd[1]: kubelet.service: Consumed 1.926s CPU time, 268.2M memory peak.
Nov 4 23:51:20.478063 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
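The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by `kubeadm init` or `kubeadm join`, which has not run at this point in the boot. For reference only, a minimal KubeletConfiguration of the kind kubeadm generates might look like the following sketch; every value shown is an assumption, not recovered from this host:

```yaml
# /var/lib/kubelet/config.yaml (illustrative sketch, normally written by kubeadm)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in the containerd config above
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

Until such a file exists, each kubelet start attempt will fail exactly as logged, and systemd will keep rescheduling restarts.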
Nov 4 23:51:20.479727 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:37832.service - OpenSSH per-connection server daemon (10.0.0.1:37832).
Nov 4 23:51:20.576424 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 37832 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:20.579032 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:20.587519 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 23:51:20.588873 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 23:51:20.595216 systemd-logind[1623]: New session 1 of user core.
Nov 4 23:51:20.624266 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 23:51:20.628691 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 23:51:20.648970 (systemd)[1764]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 23:51:20.651912 systemd-logind[1623]: New session c1 of user core.
Nov 4 23:51:20.829072 systemd[1764]: Queued start job for default target default.target.
Nov 4 23:51:20.849326 systemd[1764]: Created slice app.slice - User Application Slice.
Nov 4 23:51:20.849359 systemd[1764]: Reached target paths.target - Paths.
Nov 4 23:51:20.849408 systemd[1764]: Reached target timers.target - Timers.
Nov 4 23:51:20.851294 systemd[1764]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 23:51:20.864582 systemd[1764]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 23:51:20.864738 systemd[1764]: Reached target sockets.target - Sockets.
Nov 4 23:51:20.864785 systemd[1764]: Reached target basic.target - Basic System.
Nov 4 23:51:20.864836 systemd[1764]: Reached target default.target - Main User Target.
Nov 4 23:51:20.864872 systemd[1764]: Startup finished in 204ms.
Nov 4 23:51:20.865383 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 23:51:20.867627 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 23:51:20.935047 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:37848.service - OpenSSH per-connection server daemon (10.0.0.1:37848).
Nov 4 23:51:20.995452 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 37848 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:20.997487 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:21.002898 systemd-logind[1623]: New session 2 of user core.
Nov 4 23:51:21.016859 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 23:51:21.071878 sshd[1778]: Connection closed by 10.0.0.1 port 37848
Nov 4 23:51:21.072370 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Nov 4 23:51:21.081689 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:37848.service: Deactivated successfully.
Nov 4 23:51:21.083769 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 23:51:21.084767 systemd-logind[1623]: Session 2 logged out. Waiting for processes to exit.
Nov 4 23:51:21.088042 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:37852.service - OpenSSH per-connection server daemon (10.0.0.1:37852).
Nov 4 23:51:21.088898 systemd-logind[1623]: Removed session 2.
Nov 4 23:51:21.150354 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:21.152159 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:21.157482 systemd-logind[1623]: New session 3 of user core.
Nov 4 23:51:21.171903 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 23:51:21.242829 sshd[1787]: Connection closed by 10.0.0.1 port 37852
Nov 4 23:51:21.243236 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Nov 4 23:51:21.257717 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:37852.service: Deactivated successfully.
Nov 4 23:51:21.259597 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 23:51:21.260457 systemd-logind[1623]: Session 3 logged out. Waiting for processes to exit.
Nov 4 23:51:21.263622 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:37868.service - OpenSSH per-connection server daemon (10.0.0.1:37868).
Nov 4 23:51:21.264253 systemd-logind[1623]: Removed session 3.
Nov 4 23:51:21.335070 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 37868 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:21.336776 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:21.342056 systemd-logind[1623]: New session 4 of user core.
Nov 4 23:51:21.351696 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 23:51:21.407438 sshd[1796]: Connection closed by 10.0.0.1 port 37868
Nov 4 23:51:21.407866 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Nov 4 23:51:21.421143 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:37868.service: Deactivated successfully.
Nov 4 23:51:21.422864 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 23:51:21.423613 systemd-logind[1623]: Session 4 logged out. Waiting for processes to exit.
Nov 4 23:51:21.426256 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:37874.service - OpenSSH per-connection server daemon (10.0.0.1:37874).
Nov 4 23:51:21.427120 systemd-logind[1623]: Removed session 4.
Nov 4 23:51:21.487240 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 37874 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:21.488668 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:21.493580 systemd-logind[1623]: New session 5 of user core.
Nov 4 23:51:21.503664 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 23:51:21.565432 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 23:51:21.565767 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:51:21.585273 sudo[1806]: pam_unix(sudo:session): session closed for user root
Nov 4 23:51:21.587255 sshd[1805]: Connection closed by 10.0.0.1 port 37874
Nov 4 23:51:21.587661 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Nov 4 23:51:21.602225 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:37874.service: Deactivated successfully.
Nov 4 23:51:21.604024 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 23:51:21.604799 systemd-logind[1623]: Session 5 logged out. Waiting for processes to exit.
Nov 4 23:51:21.607474 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:37876.service - OpenSSH per-connection server daemon (10.0.0.1:37876).
Nov 4 23:51:21.608376 systemd-logind[1623]: Removed session 5.
Nov 4 23:51:21.672044 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 37876 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:21.673475 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:21.678644 systemd-logind[1623]: New session 6 of user core.
Nov 4 23:51:21.688895 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 23:51:21.746263 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 23:51:21.746703 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:51:21.864407 sudo[1817]: pam_unix(sudo:session): session closed for user root
Nov 4 23:51:21.873219 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 23:51:21.873584 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:51:21.886415 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:51:21.947834 augenrules[1839]: No rules
Nov 4 23:51:21.949971 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:51:21.950372 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:51:21.951662 sudo[1816]: pam_unix(sudo:session): session closed for user root
Nov 4 23:51:21.954093 sshd[1815]: Connection closed by 10.0.0.1 port 37876
Nov 4 23:51:21.954468 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Nov 4 23:51:21.966734 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:37876.service: Deactivated successfully.
Nov 4 23:51:21.968666 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 23:51:21.969607 systemd-logind[1623]: Session 6 logged out. Waiting for processes to exit.
Nov 4 23:51:21.973407 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:37892.service - OpenSSH per-connection server daemon (10.0.0.1:37892).
Nov 4 23:51:21.974327 systemd-logind[1623]: Removed session 6.
Nov 4 23:51:22.039881 sshd[1848]: Accepted publickey for core from 10.0.0.1 port 37892 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:51:22.042077 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:51:22.047869 systemd-logind[1623]: New session 7 of user core.
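After the two rule files are removed, augenrules finds nothing to load and reports "No rules". For context, files under /etc/audit/rules.d/ hold auditctl-style rules, one per line, which augenrules concatenates into /etc/audit/audit.rules. The sketch below is an illustrative example of the syntax only; it is not a reconstruction of the deleted Flatcar defaults:

```
# /etc/audit/rules.d/example.rules (illustrative auditctl syntax, assumed content)
# Watch a file for writes and attribute changes, tagged with a search key:
-w /etc/passwd -p wa -k identity
# Record every execve() on 64-bit syscalls:
-a always,exit -F arch=b64 -S execve -k exec
```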
Nov 4 23:51:22.057925 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 23:51:22.115416 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 23:51:22.115900 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:51:23.282695 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 23:51:23.304905 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 23:51:23.798214 dockerd[1872]: time="2025-11-04T23:51:23.798143974Z" level=info msg="Starting up"
Nov 4 23:51:23.799149 dockerd[1872]: time="2025-11-04T23:51:23.799112591Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 23:51:23.817363 dockerd[1872]: time="2025-11-04T23:51:23.817311351Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 23:51:23.910914 dockerd[1872]: time="2025-11-04T23:51:23.910836105Z" level=info msg="Loading containers: start."
Nov 4 23:51:23.922569 kernel: Initializing XFRM netlink socket
Nov 4 23:51:24.269939 systemd-networkd[1532]: docker0: Link UP
Nov 4 23:51:24.274639 dockerd[1872]: time="2025-11-04T23:51:24.274593119Z" level=info msg="Loading containers: done."
Nov 4 23:51:24.290293 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3691926700-merged.mount: Deactivated successfully.
Nov 4 23:51:24.291787 dockerd[1872]: time="2025-11-04T23:51:24.291734458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 23:51:24.291878 dockerd[1872]: time="2025-11-04T23:51:24.291857441Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 23:51:24.291987 dockerd[1872]: time="2025-11-04T23:51:24.291962249Z" level=info msg="Initializing buildkit"
Nov 4 23:51:24.324991 dockerd[1872]: time="2025-11-04T23:51:24.324928408Z" level=info msg="Completed buildkit initialization"
Nov 4 23:51:24.332979 dockerd[1872]: time="2025-11-04T23:51:24.332894442Z" level=info msg="Daemon has completed initialization"
Nov 4 23:51:24.333087 dockerd[1872]: time="2025-11-04T23:51:24.333006844Z" level=info msg="API listen on /run/docker.sock"
Nov 4 23:51:24.333297 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 23:51:24.557054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:51:24.559089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:24.883926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:24.914086 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:51:24.986356 kubelet[2097]: E1104 23:51:24.986239 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:51:24.993290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:51:24.993497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:51:24.993937 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.9M memory peak.
Nov 4 23:51:25.255753 containerd[1638]: time="2025-11-04T23:51:25.255580598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 4 23:51:26.868897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032935144.mount: Deactivated successfully.
Nov 4 23:51:28.248200 containerd[1638]: time="2025-11-04T23:51:28.246797992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:28.250198 containerd[1638]: time="2025-11-04T23:51:28.250115490Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 4 23:51:28.252303 containerd[1638]: time="2025-11-04T23:51:28.252231785Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:28.256237 containerd[1638]: time="2025-11-04T23:51:28.256156096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:28.257110 containerd[1638]: time="2025-11-04T23:51:28.257037503Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.001371565s"
Nov 4 23:51:28.257110 containerd[1638]: time="2025-11-04T23:51:28.257106912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 4 23:51:28.258428 containerd[1638]: time="2025-11-04T23:51:28.258385695Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 4 23:51:30.081260 containerd[1638]: time="2025-11-04T23:51:30.081183408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:30.082373 containerd[1638]: time="2025-11-04T23:51:30.082324878Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 4 23:51:30.083614 containerd[1638]: time="2025-11-04T23:51:30.083574349Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:30.086273 containerd[1638]: time="2025-11-04T23:51:30.086224417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:30.087130 containerd[1638]: time="2025-11-04T23:51:30.087101523Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.828682278s"
Nov 4 23:51:30.087193 containerd[1638]: time="2025-11-04T23:51:30.087133465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 4 23:51:30.087985 containerd[1638]: time="2025-11-04T23:51:30.087731856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 4 23:51:31.884377 containerd[1638]: time="2025-11-04T23:51:31.884292153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:31.885110 containerd[1638]: time="2025-11-04T23:51:31.885030691Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 4 23:51:31.886263 containerd[1638]: time="2025-11-04T23:51:31.886222286Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:31.889502 containerd[1638]: time="2025-11-04T23:51:31.889427491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:31.890677 containerd[1638]: time="2025-11-04T23:51:31.890617287Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.802842031s"
Nov 4 23:51:31.890677 containerd[1638]: time="2025-11-04T23:51:31.890665265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 4 23:51:31.891303 containerd[1638]: time="2025-11-04T23:51:31.891269805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 4 23:51:33.222432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840073717.mount: Deactivated successfully.
Nov 4 23:51:33.982436 containerd[1638]: time="2025-11-04T23:51:33.982335785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:33.983402 containerd[1638]: time="2025-11-04T23:51:33.983357191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 4 23:51:33.984937 containerd[1638]: time="2025-11-04T23:51:33.984852836Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:33.987290 containerd[1638]: time="2025-11-04T23:51:33.987231282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:33.987876 containerd[1638]: time="2025-11-04T23:51:33.987821499Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.096508499s"
Nov 4 23:51:33.987876 containerd[1638]: time="2025-11-04T23:51:33.987860540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 4 23:51:33.989355 containerd[1638]: time="2025-11-04T23:51:33.989300811Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 4 23:51:35.057074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 4 23:51:35.059136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:35.293438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:35.299072 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:51:36.132414 kubelet[2189]: E1104 23:51:36.132296 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:51:36.138157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:51:36.138388 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:51:36.138875 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.7M memory peak.
Nov 4 23:51:36.806028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366096803.mount: Deactivated successfully.
Nov 4 23:51:38.347749 containerd[1638]: time="2025-11-04T23:51:38.347640569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:38.348559 containerd[1638]: time="2025-11-04T23:51:38.348417009Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 4 23:51:38.349815 containerd[1638]: time="2025-11-04T23:51:38.349782822Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:38.352947 containerd[1638]: time="2025-11-04T23:51:38.352861382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:38.353914 containerd[1638]: time="2025-11-04T23:51:38.353854296Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.364496381s"
Nov 4 23:51:38.354073 containerd[1638]: time="2025-11-04T23:51:38.353924029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 4 23:51:38.354945 containerd[1638]: time="2025-11-04T23:51:38.354916364Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 4 23:51:39.120696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424611812.mount: Deactivated successfully.
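The "Pulled image ... in Ns" entries above embed the pull duration directly in the message text. When summarizing a run like this one, a small parser can recover those durations; the regex below is an assumption about this log's formatting (durations rendered by Go as either seconds or milliseconds), not part of any containerd API:

```python
import re
from typing import Optional

def pull_seconds(entry: str) -> Optional[float]:
    """Extract the trailing 'in <N>s' / 'in <N>ms' duration from a
    containerd 'Pulled image' log entry, normalized to seconds."""
    m = re.search(r'in (\d+(?:\.\d+)?)(ms|s)\b', entry)
    if not m:
        return None
    value = float(m.group(1))
    # Go's Duration.String() prints sub-second pulls in milliseconds.
    return value / 1000 if m.group(2) == "ms" else value

entry = 'msg="Pulled image ... size \\"20939036\\" in 4.364496381s"'
print(pull_seconds(entry))  # 4.364496381
```

Applied to the entries in this log, it would report, e.g., 4.364496381 for the coredns pull and about 0.781 for the pause image below.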
Nov 4 23:51:39.130784 containerd[1638]: time="2025-11-04T23:51:39.130673066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:51:39.131800 containerd[1638]: time="2025-11-04T23:51:39.131700335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 4 23:51:39.132771 containerd[1638]: time="2025-11-04T23:51:39.132687276Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:51:39.135266 containerd[1638]: time="2025-11-04T23:51:39.135209919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:51:39.136039 containerd[1638]: time="2025-11-04T23:51:39.135982565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 781.028921ms"
Nov 4 23:51:39.136039 containerd[1638]: time="2025-11-04T23:51:39.136026725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 4 23:51:39.136802 containerd[1638]: time="2025-11-04T23:51:39.136754470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 4 23:51:40.082034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034853319.mount: Deactivated successfully.
Nov 4 23:51:42.692471 containerd[1638]: time="2025-11-04T23:51:42.692366489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:42.693418 containerd[1638]: time="2025-11-04T23:51:42.693145569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 4 23:51:42.694758 containerd[1638]: time="2025-11-04T23:51:42.694723999Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:42.698550 containerd[1638]: time="2025-11-04T23:51:42.698489452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:51:42.699668 containerd[1638]: time="2025-11-04T23:51:42.699630239Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.562835551s"
Nov 4 23:51:42.699739 containerd[1638]: time="2025-11-04T23:51:42.699672117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 4 23:51:46.306913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 4 23:51:46.309076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:46.571169 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 23:51:46.571339 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 23:51:46.571779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:46.572090 systemd[1]: kubelet.service: Consumed 208ms CPU time, 87.4M memory peak.
Nov 4 23:51:46.575835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:46.609367 systemd[1]: Reload requested from client PID 2344 ('systemctl') (unit session-7.scope)...
Nov 4 23:51:46.609394 systemd[1]: Reloading...
Nov 4 23:51:46.718577 zram_generator::config[2390]: No configuration found.
Nov 4 23:51:47.432689 systemd[1]: Reloading finished in 822 ms.
Nov 4 23:51:47.524731 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 23:51:47.524875 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 23:51:47.525240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:47.525305 systemd[1]: kubelet.service: Consumed 204ms CPU time, 98.4M memory peak.
Nov 4 23:51:47.527592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:51:48.846220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:51:48.869044 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:51:48.955262 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:51:48.955262 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 23:51:48.955262 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:51:48.955783 kubelet[2436]: I1104 23:51:48.955293 2436 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 23:51:49.456857 kubelet[2436]: I1104 23:51:49.456784 2436 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 4 23:51:49.456857 kubelet[2436]: I1104 23:51:49.456818 2436 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 23:51:49.457104 kubelet[2436]: I1104 23:51:49.457082 2436 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 23:51:49.492239 kubelet[2436]: E1104 23:51:49.492157 2436 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 4 23:51:49.492404 kubelet[2436]: I1104 23:51:49.492258 2436 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 23:51:49.500586 kubelet[2436]: I1104 23:51:49.499582 2436 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 23:51:49.505501 kubelet[2436]: I1104 23:51:49.505472 2436 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 23:51:49.505784 kubelet[2436]: I1104 23:51:49.505739 2436 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:51:49.505970 kubelet[2436]: I1104 23:51:49.505775 2436 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:51:49.506146 kubelet[2436]: I1104 23:51:49.505979 2436 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:51:49.506146 kubelet[2436]: I1104 23:51:49.505990 2436 container_manager_linux.go:303] "Creating device plugin manager"
Nov 4 23:51:49.507087 kubelet[2436]: I1104 23:51:49.507051 2436 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:51:49.508883 kubelet[2436]: I1104 23:51:49.508842 2436 kubelet.go:480] "Attempting to sync node with API server"
Nov 4 23:51:49.508883 kubelet[2436]: I1104 23:51:49.508861 2436 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:51:49.508968 kubelet[2436]: I1104 23:51:49.508895 2436 kubelet.go:386] "Adding apiserver pod source"
Nov 4 23:51:49.508968 kubelet[2436]: I1104 23:51:49.508916 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:51:49.514396 kubelet[2436]: I1104 23:51:49.514333 2436 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:51:49.515748 kubelet[2436]: I1104 23:51:49.514924 2436 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 23:51:49.515748 kubelet[2436]: E1104 23:51:49.515476 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 4 23:51:49.515748 kubelet[2436]: E1104 23:51:49.515597 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 4 23:51:49.516093 kubelet[2436]: W1104 23:51:49.516068 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 4 23:51:49.520371 kubelet[2436]: I1104 23:51:49.520335 2436 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:51:49.520759 kubelet[2436]: I1104 23:51:49.520740 2436 server.go:1289] "Started kubelet"
Nov 4 23:51:49.521874 kubelet[2436]: I1104 23:51:49.521804 2436 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:51:49.522926 kubelet[2436]: I1104 23:51:49.522828 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:51:49.523453 kubelet[2436]: I1104 23:51:49.523383 2436 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:51:49.528503 kubelet[2436]: I1104 23:51:49.527709 2436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:51:49.528503 kubelet[2436]: I1104 23:51:49.527965 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:51:49.528503 kubelet[2436]: I1104 23:51:49.528392 2436 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:51:49.528812 kubelet[2436]: E1104 23:51:49.528597 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:51:49.530354 kubelet[2436]: I1104 23:51:49.529526 2436 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:51:49.530354 kubelet[2436]: I1104 23:51:49.529651 2436 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:51:49.530693 kubelet[2436]: E1104 23:51:49.530657 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 4 23:51:49.530795 kubelet[2436]: E1104 23:51:49.530698 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms"
Nov 4 23:51:49.531328 kubelet[2436]: I1104 23:51:49.531208 2436 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:51:49.531467 kubelet[2436]: I1104 23:51:49.531425 2436 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:51:49.532590 kubelet[2436]: I1104 23:51:49.532495 2436 server.go:317] "Adding debug handlers to kubelet server"
Nov 4 23:51:49.533341 kubelet[2436]: I1104 23:51:49.533303 2436 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:51:49.533861 kubelet[2436]: E1104 23:51:49.532786 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f2cb608ab582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:51:49.52069261 +0000 UTC m=+0.646404262,LastTimestamp:2025-11-04 23:51:49.52069261 +0000 UTC m=+0.646404262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 23:51:49.550486 kubelet[2436]: I1104 23:51:49.550450 2436 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:51:49.550486 kubelet[2436]: I1104 23:51:49.550467 2436 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:51:49.550486 kubelet[2436]: I1104 23:51:49.550487 2436 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:51:49.622514 kubelet[2436]: E1104 23:51:49.622337 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f2cb608ab582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:51:49.52069261 +0000 UTC m=+0.646404262,LastTimestamp:2025-11-04 23:51:49.52069261 +0000 UTC m=+0.646404262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 23:51:49.629532 kubelet[2436]: E1104 23:51:49.629447 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:51:49.730078 kubelet[2436]: E1104 23:51:49.729900 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:51:49.731719 kubelet[2436]: E1104 23:51:49.731661 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms"
Nov 4 23:51:49.830993 kubelet[2436]: E1104 23:51:49.830876 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:51:49.889043 kubelet[2436]: I1104 23:51:49.888978 2436 policy_none.go:49] "None policy: Start"
Nov 4 23:51:49.889043 kubelet[2436]: I1104 23:51:49.889014 2436 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:51:49.889043 kubelet[2436]: I1104 23:51:49.889029 2436 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:51:49.892725 kubelet[2436]: I1104 23:51:49.892667 2436 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:51:49.895712 kubelet[2436]: I1104 23:51:49.895680 2436 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:51:49.895832 kubelet[2436]: I1104 23:51:49.895722 2436 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 4 23:51:49.895832 kubelet[2436]: I1104 23:51:49.895746 2436 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:51:49.895832 kubelet[2436]: I1104 23:51:49.895757 2436 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 4 23:51:49.895932 kubelet[2436]: E1104 23:51:49.895816 2436 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:51:49.896506 kubelet[2436]: E1104 23:51:49.896312 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 4 23:51:49.899045 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 23:51:49.920593 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 23:51:49.924651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 23:51:49.932037 kubelet[2436]: E1104 23:51:49.931997 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:51:49.937041 kubelet[2436]: E1104 23:51:49.936890 2436 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 23:51:49.937213 kubelet[2436]: I1104 23:51:49.937197 2436 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:51:49.937287 kubelet[2436]: I1104 23:51:49.937223 2436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:51:49.938085 kubelet[2436]: I1104 23:51:49.938038 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:51:49.938720 kubelet[2436]: E1104 23:51:49.938670 2436 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:51:49.938720 kubelet[2436]: E1104 23:51:49.938721 2436 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 4 23:51:50.033143 kubelet[2436]: I1104 23:51:50.032946 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:51:50.033143 kubelet[2436]: I1104 23:51:50.033013 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:51:50.033143 kubelet[2436]: I1104 23:51:50.033040 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:51:50.040199 kubelet[2436]: I1104 23:51:50.040178 2436 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:51:50.040575 kubelet[2436]: E1104 23:51:50.040520 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Nov 4 23:51:50.087530 systemd[1]: Created slice kubepods-burstable-pod8c28fb1e99325ba8dfded937e60ac72a.slice - libcontainer container kubepods-burstable-pod8c28fb1e99325ba8dfded937e60ac72a.slice.
Nov 4 23:51:50.108458 kubelet[2436]: E1104 23:51:50.108428 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:51:50.132254 kubelet[2436]: E1104 23:51:50.132205 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms"
Nov 4 23:51:50.133429 kubelet[2436]: I1104 23:51:50.133361 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 23:51:50.133429 kubelet[2436]: I1104 23:51:50.133417 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:51:50.133621 kubelet[2436]: I1104 23:51:50.133445 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:51:50.133621 kubelet[2436]: I1104 23:51:50.133467 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:51:50.133621 kubelet[2436]: I1104 23:51:50.133596 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:51:50.133736 kubelet[2436]: I1104 23:51:50.133628 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:51:50.135434 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Nov 4 23:51:50.137636 kubelet[2436]: E1104 23:51:50.137598 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:51:50.156403 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Nov 4 23:51:50.158797 kubelet[2436]: E1104 23:51:50.158761 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:51:50.242801 kubelet[2436]: I1104 23:51:50.242747 2436 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:51:50.243304 kubelet[2436]: E1104 23:51:50.243246 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Nov 4 23:51:50.409678 kubelet[2436]: E1104 23:51:50.409607 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.410735 containerd[1638]: time="2025-11-04T23:51:50.410660081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c28fb1e99325ba8dfded937e60ac72a,Namespace:kube-system,Attempt:0,}"
Nov 4 23:51:50.438833 kubelet[2436]: E1104 23:51:50.438733 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.439796 containerd[1638]: time="2025-11-04T23:51:50.439713891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Nov 4 23:51:50.445713 containerd[1638]: time="2025-11-04T23:51:50.445631402Z" level=info msg="connecting to shim abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12" address="unix:///run/containerd/s/6c55f695d65571d56e6878f3c88e80423ba0c29edb3bd67ef5adde24b42fb28f" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:51:50.460313 kubelet[2436]: E1104 23:51:50.460257 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.461383 containerd[1638]: time="2025-11-04T23:51:50.461199105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Nov 4 23:51:50.494529 kubelet[2436]: E1104 23:51:50.494460 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 4 23:51:50.632204 kubelet[2436]: E1104 23:51:50.632135 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 4 23:51:50.645456 kubelet[2436]: I1104 23:51:50.645412 2436 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:51:50.645982 kubelet[2436]: E1104 23:51:50.645954 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Nov 4 23:51:50.736892 containerd[1638]: time="2025-11-04T23:51:50.736434837Z" level=info msg="connecting to shim c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726" address="unix:///run/containerd/s/50c56e8dee164cbd22d351a9b63d54ec76000ba4899cd0a2f6cd8f514ca74225" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:51:50.736892 containerd[1638]: time="2025-11-04T23:51:50.736511647Z" level=info msg="connecting to shim 8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51" address="unix:///run/containerd/s/6581e0d51d340126c31641e8f96d526ebd08ee93c7398f013de23f34f3839ee4" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:51:50.738262 systemd[1]: Started cri-containerd-abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12.scope - libcontainer container abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12.
Nov 4 23:51:50.791644 systemd[1]: Started cri-containerd-8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51.scope - libcontainer container 8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51.
Nov 4 23:51:50.799941 systemd[1]: Started cri-containerd-c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726.scope - libcontainer container c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726.
Nov 4 23:51:50.888472 containerd[1638]: time="2025-11-04T23:51:50.888397177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c28fb1e99325ba8dfded937e60ac72a,Namespace:kube-system,Attempt:0,} returns sandbox id \"abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12\""
Nov 4 23:51:50.890688 kubelet[2436]: E1104 23:51:50.890644 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.892603 containerd[1638]: time="2025-11-04T23:51:50.892506335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726\""
Nov 4 23:51:50.893437 kubelet[2436]: E1104 23:51:50.893403 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.894443 containerd[1638]: time="2025-11-04T23:51:50.894387951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51\""
Nov 4 23:51:50.895326 kubelet[2436]: E1104 23:51:50.895281 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:51:50.896741 containerd[1638]: time="2025-11-04T23:51:50.896691392Z" level=info msg="CreateContainer within sandbox \"abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 4 23:51:50.900858 containerd[1638]: time="2025-11-04T23:51:50.900770831Z" level=info msg="CreateContainer within sandbox \"c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 4 23:51:50.913916 containerd[1638]: time="2025-11-04T23:51:50.913846235Z" level=info msg="CreateContainer within sandbox \"8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 4 23:51:50.917749 containerd[1638]: time="2025-11-04T23:51:50.917701788Z" level=info msg="Container a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:51:50.933952 kubelet[2436]: E1104 23:51:50.933900 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s"
Nov 4 23:51:50.935172 containerd[1638]: time="2025-11-04T23:51:50.935120628Z" level=info msg="Container ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:51:50.935886 containerd[1638]: time="2025-11-04T23:51:50.935843260Z" level=info msg="CreateContainer within sandbox \"abeefb2d879adc91d70ec672f57f407639c997357f1ab12e2db5ec74ccbb6c12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75\""
Nov 4 23:51:50.936599 containerd[1638]: time="2025-11-04T23:51:50.936567645Z" level=info msg="StartContainer for \"a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75\""
Nov 4 23:51:50.938145 containerd[1638]: time="2025-11-04T23:51:50.938074088Z" level=info msg="connecting to shim a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75" address="unix:///run/containerd/s/6c55f695d65571d56e6878f3c88e80423ba0c29edb3bd67ef5adde24b42fb28f" protocol=ttrpc version=3
Nov 4 23:51:50.939259 containerd[1638]: time="2025-11-04T23:51:50.938638772Z" level=info msg="Container a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:51:50.949636 containerd[1638]: time="2025-11-04T23:51:50.949531742Z" level=info msg="CreateContainer within sandbox \"c606f62dfa7bb744275992819a88ae343cf98178b502856e6c51063c15f09726\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930\""
Nov 4 23:51:50.950912 containerd[1638]: time="2025-11-04T23:51:50.950848505Z" level=info msg="StartContainer for \"ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930\""
Nov 4 23:51:50.953171 containerd[1638]: time="2025-11-04T23:51:50.953121506Z" level=info msg="connecting to shim ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930" address="unix:///run/containerd/s/50c56e8dee164cbd22d351a9b63d54ec76000ba4899cd0a2f6cd8f514ca74225" protocol=ttrpc version=3 Nov 4
23:51:50.956516 containerd[1638]: time="2025-11-04T23:51:50.956253424Z" level=info msg="CreateContainer within sandbox \"8aa7dd79434ddbb572dd00e5450082e29662751f1a2b11e8b7b66876ef6cae51\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7\"" Nov 4 23:51:50.959874 containerd[1638]: time="2025-11-04T23:51:50.959812307Z" level=info msg="StartContainer for \"a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7\"" Nov 4 23:51:50.962913 containerd[1638]: time="2025-11-04T23:51:50.962859039Z" level=info msg="connecting to shim a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7" address="unix:///run/containerd/s/6581e0d51d340126c31641e8f96d526ebd08ee93c7398f013de23f34f3839ee4" protocol=ttrpc version=3 Nov 4 23:51:50.964862 systemd[1]: Started cri-containerd-a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75.scope - libcontainer container a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75. Nov 4 23:51:50.993872 systemd[1]: Started cri-containerd-ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930.scope - libcontainer container ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930. Nov 4 23:51:51.011958 systemd[1]: Started cri-containerd-a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7.scope - libcontainer container a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7. 
Nov 4 23:51:51.050573 containerd[1638]: time="2025-11-04T23:51:51.050407663Z" level=info msg="StartContainer for \"a6d797466a1196c99290f00a98ae9ab535701c3d7b1abd4e9c03dc237a3fce75\" returns successfully" Nov 4 23:51:51.074258 kubelet[2436]: E1104 23:51:51.074181 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:51:51.099095 containerd[1638]: time="2025-11-04T23:51:51.098931091Z" level=info msg="StartContainer for \"ecb32ed7409d00d8ee5eb2d80243e7da4956f31a536bba69f0ccbfab3cc03930\" returns successfully" Nov 4 23:51:51.111774 containerd[1638]: time="2025-11-04T23:51:51.111676709Z" level=info msg="StartContainer for \"a55217c2fc16fb4092cc1bb6d0eb2f556a2042b38db176b451fff81438e614e7\" returns successfully" Nov 4 23:51:51.449049 kubelet[2436]: I1104 23:51:51.448843 2436 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:51:51.923797 kubelet[2436]: E1104 23:51:51.923751 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:51.924347 kubelet[2436]: E1104 23:51:51.924320 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:51.928095 kubelet[2436]: E1104 23:51:51.928063 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:51.928251 kubelet[2436]: E1104 23:51:51.928223 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:51.931224 kubelet[2436]: E1104 23:51:51.931195 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:51.931357 kubelet[2436]: E1104 23:51:51.931333 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:52.848104 kubelet[2436]: E1104 23:51:52.848047 2436 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 23:51:52.935855 kubelet[2436]: E1104 23:51:52.935804 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:52.936040 kubelet[2436]: E1104 23:51:52.935957 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:52.936121 kubelet[2436]: E1104 23:51:52.936092 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:52.936221 kubelet[2436]: E1104 23:51:52.936200 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:51:52.936268 kubelet[2436]: E1104 23:51:52.936257 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:52.936312 kubelet[2436]: E1104 23:51:52.936294 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:52.957832 kubelet[2436]: I1104 23:51:52.957767 2436 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:51:52.957832 kubelet[2436]: E1104 23:51:52.957819 2436 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 23:51:53.229822 kubelet[2436]: I1104 23:51:53.229594 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:53.266705 kubelet[2436]: E1104 23:51:53.266615 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:53.267008 kubelet[2436]: I1104 23:51:53.266962 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:53.273175 kubelet[2436]: E1104 23:51:53.273092 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:53.273175 kubelet[2436]: I1104 23:51:53.273169 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:53.277410 kubelet[2436]: E1104 23:51:53.277335 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:53.516297 kubelet[2436]: I1104 23:51:53.516075 2436 apiserver.go:52] "Watching apiserver" Nov 4 23:51:53.529730 kubelet[2436]: I1104 23:51:53.529662 2436 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:51:53.936707 kubelet[2436]: 
I1104 23:51:53.936648 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:53.937219 kubelet[2436]: I1104 23:51:53.936737 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:53.939549 kubelet[2436]: E1104 23:51:53.939509 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:53.939630 kubelet[2436]: E1104 23:51:53.939510 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:53.939720 kubelet[2436]: E1104 23:51:53.939697 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:53.939769 kubelet[2436]: E1104 23:51:53.939736 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:54.937394 kubelet[2436]: I1104 23:51:54.937341 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:54.943292 kubelet[2436]: E1104 23:51:54.943239 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:55.853145 systemd[1]: Reload requested from client PID 2727 ('systemctl') (unit session-7.scope)... Nov 4 23:51:55.853164 systemd[1]: Reloading... Nov 4 23:51:55.939574 zram_generator::config[2769]: No configuration found. 
Nov 4 23:51:55.940932 kubelet[2436]: E1104 23:51:55.940847 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:56.226634 systemd[1]: Reloading finished in 372 ms. Nov 4 23:51:56.265453 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:51:56.289361 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:51:56.289810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:51:56.289889 systemd[1]: kubelet.service: Consumed 1.274s CPU time, 134.3M memory peak. Nov 4 23:51:56.291773 update_engine[1625]: I20251104 23:51:56.291638 1625 update_attempter.cc:509] Updating boot flags... Nov 4 23:51:56.292417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:51:56.599548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:51:56.613059 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:51:56.688158 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:51:56.688158 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:51:56.688158 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:51:56.688158 kubelet[2832]: I1104 23:51:56.687765 2832 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:51:56.696439 kubelet[2832]: I1104 23:51:56.696349 2832 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:51:56.696439 kubelet[2832]: I1104 23:51:56.696413 2832 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:51:56.696865 kubelet[2832]: I1104 23:51:56.696714 2832 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:51:56.698296 kubelet[2832]: I1104 23:51:56.698243 2832 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:51:56.702261 kubelet[2832]: I1104 23:51:56.702188 2832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:51:56.710848 kubelet[2832]: I1104 23:51:56.710788 2832 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:51:56.720203 kubelet[2832]: I1104 23:51:56.720139 2832 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:51:56.720708 kubelet[2832]: I1104 23:51:56.720642 2832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:51:56.721105 kubelet[2832]: I1104 23:51:56.720701 2832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:51:56.721240 kubelet[2832]: I1104 23:51:56.721114 2832 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:51:56.721240 
kubelet[2832]: I1104 23:51:56.721133 2832 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:51:56.721240 kubelet[2832]: I1104 23:51:56.721234 2832 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:51:56.722054 kubelet[2832]: I1104 23:51:56.721489 2832 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:51:56.722054 kubelet[2832]: I1104 23:51:56.721529 2832 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:51:56.722054 kubelet[2832]: I1104 23:51:56.721576 2832 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:51:56.722054 kubelet[2832]: I1104 23:51:56.721590 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:51:56.724514 kubelet[2832]: I1104 23:51:56.724482 2832 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:51:56.725106 kubelet[2832]: I1104 23:51:56.725064 2832 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:51:56.731498 kubelet[2832]: I1104 23:51:56.731445 2832 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:51:56.731498 kubelet[2832]: I1104 23:51:56.731501 2832 server.go:1289] "Started kubelet" Nov 4 23:51:56.732493 kubelet[2832]: I1104 23:51:56.731638 2832 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:51:56.732493 kubelet[2832]: I1104 23:51:56.731753 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:51:56.732493 kubelet[2832]: I1104 23:51:56.732117 2832 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:51:56.735685 kubelet[2832]: I1104 23:51:56.735662 2832 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:51:56.739482 kubelet[2832]: E1104 
23:51:56.739294 2832 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:51:56.742212 kubelet[2832]: I1104 23:51:56.742176 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:51:56.743440 kubelet[2832]: I1104 23:51:56.743413 2832 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:51:56.743718 kubelet[2832]: I1104 23:51:56.743701 2832 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:51:56.743944 kubelet[2832]: I1104 23:51:56.743918 2832 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:51:56.745258 kubelet[2832]: I1104 23:51:56.745220 2832 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:51:56.746371 kubelet[2832]: I1104 23:51:56.746352 2832 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:51:56.746529 kubelet[2832]: I1104 23:51:56.746500 2832 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:51:56.749998 kubelet[2832]: I1104 23:51:56.749959 2832 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:51:56.762313 kubelet[2832]: I1104 23:51:56.762231 2832 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:51:56.764212 kubelet[2832]: I1104 23:51:56.764179 2832 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 4 23:51:56.764212 kubelet[2832]: I1104 23:51:56.764204 2832 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:51:56.764320 kubelet[2832]: I1104 23:51:56.764232 2832 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:51:56.764320 kubelet[2832]: I1104 23:51:56.764240 2832 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:51:56.764320 kubelet[2832]: E1104 23:51:56.764282 2832 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:51:56.789219 kubelet[2832]: I1104 23:51:56.789182 2832 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789469 2832 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789500 2832 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789708 2832 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789722 2832 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789746 2832 policy_none.go:49] "None policy: Start" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789765 2832 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789787 2832 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:51:56.790652 kubelet[2832]: I1104 23:51:56.789936 2832 state_mem.go:75] "Updated machine memory state" Nov 4 23:51:56.795405 kubelet[2832]: E1104 23:51:56.795360 2832 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:51:56.795654 kubelet[2832]: I1104 23:51:56.795609 
2832 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:51:56.795717 kubelet[2832]: I1104 23:51:56.795636 2832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:51:56.796595 kubelet[2832]: I1104 23:51:56.796289 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:51:56.797814 kubelet[2832]: E1104 23:51:56.797799 2832 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:51:56.853414 sudo[2873]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 23:51:56.853966 sudo[2873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 23:51:56.866728 kubelet[2832]: I1104 23:51:56.866649 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:56.868578 kubelet[2832]: I1104 23:51:56.866946 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:56.868578 kubelet[2832]: I1104 23:51:56.867509 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.878850 kubelet[2832]: E1104 23:51:56.877978 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:56.907469 kubelet[2832]: I1104 23:51:56.907392 2832 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:51:56.922153 kubelet[2832]: I1104 23:51:56.922089 2832 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 23:51:56.922334 kubelet[2832]: I1104 23:51:56.922205 2832 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:51:56.945075 kubelet[2832]: I1104 
23:51:56.944998 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.945075 kubelet[2832]: I1104 23:51:56.945062 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:56.945075 kubelet[2832]: I1104 23:51:56.945088 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:56.945331 kubelet[2832]: I1104 23:51:56.945110 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.945331 kubelet[2832]: I1104 23:51:56.945146 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.945331 kubelet[2832]: I1104 
23:51:56.945167 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.945331 kubelet[2832]: I1104 23:51:56.945186 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:56.945331 kubelet[2832]: I1104 23:51:56.945208 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:56.945481 kubelet[2832]: I1104 23:51:56.945226 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c28fb1e99325ba8dfded937e60ac72a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c28fb1e99325ba8dfded937e60ac72a\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:57.177786 kubelet[2832]: E1104 23:51:57.177379 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:57.178139 kubelet[2832]: E1104 23:51:57.178064 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 4 23:51:57.178726 kubelet[2832]: E1104 23:51:57.178697 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:57.297305 sudo[2873]: pam_unix(sudo:session): session closed for user root Nov 4 23:51:57.723795 kubelet[2832]: I1104 23:51:57.723730 2832 apiserver.go:52] "Watching apiserver" Nov 4 23:51:57.744675 kubelet[2832]: I1104 23:51:57.744605 2832 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:51:57.779989 kubelet[2832]: I1104 23:51:57.779797 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:57.779989 kubelet[2832]: I1104 23:51:57.779836 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:57.779989 kubelet[2832]: I1104 23:51:57.779986 2832 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:57.976433 kubelet[2832]: E1104 23:51:57.976197 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 23:51:57.977346 kubelet[2832]: E1104 23:51:57.976519 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:51:57.977695 kubelet[2832]: E1104 23:51:57.977621 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:57.978638 kubelet[2832]: E1104 23:51:57.977629 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 4 23:51:57.978863 kubelet[2832]: E1104 23:51:57.978827 2832 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 23:51:57.979009 kubelet[2832]: E1104 23:51:57.978987 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:58.122710 kubelet[2832]: I1104 23:51:58.122400 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.122374727 podStartE2EDuration="2.122374727s" podCreationTimestamp="2025-11-04 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:51:58.077513734 +0000 UTC m=+1.451587662" watchObservedRunningTime="2025-11-04 23:51:58.122374727 +0000 UTC m=+1.496448685" Nov 4 23:51:58.126791 kubelet[2832]: I1104 23:51:58.126688 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.12640831 podStartE2EDuration="4.12640831s" podCreationTimestamp="2025-11-04 23:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:51:58.122787452 +0000 UTC m=+1.496861400" watchObservedRunningTime="2025-11-04 23:51:58.12640831 +0000 UTC m=+1.500482268" Nov 4 23:51:58.175561 kubelet[2832]: I1104 23:51:58.175484 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.175462374 podStartE2EDuration="2.175462374s" podCreationTimestamp="2025-11-04 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-04 23:51:58.137001384 +0000 UTC m=+1.511075312" watchObservedRunningTime="2025-11-04 23:51:58.175462374 +0000 UTC m=+1.549536302" Nov 4 23:51:58.780959 kubelet[2832]: E1104 23:51:58.780899 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:58.782001 kubelet[2832]: E1104 23:51:58.781927 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:58.782392 kubelet[2832]: E1104 23:51:58.782368 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:51:58.841744 sudo[1852]: pam_unix(sudo:session): session closed for user root Nov 4 23:51:58.844419 sshd[1851]: Connection closed by 10.0.0.1 port 37892 Nov 4 23:51:58.845484 sshd-session[1848]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:58.852725 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:37892.service: Deactivated successfully. Nov 4 23:51:58.855702 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:51:58.856001 systemd[1]: session-7.scope: Consumed 6.823s CPU time, 257M memory peak. Nov 4 23:51:58.858651 systemd-logind[1623]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:51:58.860050 systemd-logind[1623]: Removed session 7. 
Nov 4 23:52:00.305571 kubelet[2832]: E1104 23:52:00.305414 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:01.759130 kubelet[2832]: E1104 23:52:01.759080 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:01.780276 kubelet[2832]: I1104 23:52:01.780238 2832 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:52:01.780608 containerd[1638]: time="2025-11-04T23:52:01.780559785Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 23:52:01.781042 kubelet[2832]: I1104 23:52:01.780762 2832 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:52:01.785745 kubelet[2832]: E1104 23:52:01.785724 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:02.559707 kubelet[2832]: E1104 23:52:02.559650 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:03.573065 kubelet[2832]: E1104 23:52:03.573007 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:03.636025 systemd[1]: Created slice kubepods-besteffort-poddd2354f4_a0a6_4875_8056_e9bfdaa952cd.slice - libcontainer container kubepods-besteffort-poddd2354f4_a0a6_4875_8056_e9bfdaa952cd.slice. 
Nov 4 23:52:03.652500 systemd[1]: Created slice kubepods-burstable-podfd16a140_06b9_436e_af33_de26c18ef27a.slice - libcontainer container kubepods-burstable-podfd16a140_06b9_436e_af33_de26c18ef27a.slice. Nov 4 23:52:03.688646 systemd[1]: Created slice kubepods-besteffort-pod17b8e2b6_209b_4eb5_b124_29c5d32cce55.slice - libcontainer container kubepods-besteffort-pod17b8e2b6_209b_4eb5_b124_29c5d32cce55.slice. Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.691960 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd2354f4-a0a6-4875-8056-e9bfdaa952cd-kube-proxy\") pod \"kube-proxy-xs278\" (UID: \"dd2354f4-a0a6-4875-8056-e9bfdaa952cd\") " pod="kube-system/kube-proxy-xs278" Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.692001 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7zr\" (UniqueName: \"kubernetes.io/projected/dd2354f4-a0a6-4875-8056-e9bfdaa952cd-kube-api-access-4w7zr\") pod \"kube-proxy-xs278\" (UID: \"dd2354f4-a0a6-4875-8056-e9bfdaa952cd\") " pod="kube-system/kube-proxy-xs278" Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.692026 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-run\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.692047 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2354f4-a0a6-4875-8056-e9bfdaa952cd-lib-modules\") pod \"kube-proxy-xs278\" (UID: \"dd2354f4-a0a6-4875-8056-e9bfdaa952cd\") " pod="kube-system/kube-proxy-xs278" Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.692067 2832 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-hostproc\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.692940 kubelet[2832]: I1104 23:52:03.692087 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-xtables-lock\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692104 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cni-path\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692136 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-lib-modules\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692214 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-config-path\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692261 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-net\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692290 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd2354f4-a0a6-4875-8056-e9bfdaa952cd-xtables-lock\") pod \"kube-proxy-xs278\" (UID: \"dd2354f4-a0a6-4875-8056-e9bfdaa952cd\") " pod="kube-system/kube-proxy-xs278" Nov 4 23:52:03.701898 kubelet[2832]: I1104 23:52:03.692336 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-cgroup\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.702825 kubelet[2832]: I1104 23:52:03.692357 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-bpf-maps\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.702825 kubelet[2832]: I1104 23:52:03.692374 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd16a140-06b9-436e-af33-de26c18ef27a-clustermesh-secrets\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.702825 kubelet[2832]: I1104 23:52:03.692393 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-kernel\") pod \"cilium-h678q\" (UID: 
\"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.702825 kubelet[2832]: I1104 23:52:03.692414 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17b8e2b6-209b-4eb5-b124-29c5d32cce55-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-flmmk\" (UID: \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\") " pod="kube-system/cilium-operator-6c4d7847fc-flmmk" Nov 4 23:52:03.702825 kubelet[2832]: I1104 23:52:03.692432 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-etc-cni-netd\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.703023 kubelet[2832]: I1104 23:52:03.692453 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qvt\" (UniqueName: \"kubernetes.io/projected/17b8e2b6-209b-4eb5-b124-29c5d32cce55-kube-api-access-86qvt\") pod \"cilium-operator-6c4d7847fc-flmmk\" (UID: \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\") " pod="kube-system/cilium-operator-6c4d7847fc-flmmk" Nov 4 23:52:03.703023 kubelet[2832]: I1104 23:52:03.692475 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-hubble-tls\") pod \"cilium-h678q\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.703023 kubelet[2832]: I1104 23:52:03.692503 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswqc\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-kube-api-access-lswqc\") pod \"cilium-h678q\" (UID: 
\"fd16a140-06b9-436e-af33-de26c18ef27a\") " pod="kube-system/cilium-h678q" Nov 4 23:52:03.950217 kubelet[2832]: E1104 23:52:03.949717 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:03.951133 containerd[1638]: time="2025-11-04T23:52:03.950433895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xs278,Uid:dd2354f4-a0a6-4875-8056-e9bfdaa952cd,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:03.958009 kubelet[2832]: E1104 23:52:03.957938 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:03.958752 containerd[1638]: time="2025-11-04T23:52:03.958702591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h678q,Uid:fd16a140-06b9-436e-af33-de26c18ef27a,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:03.997748 kubelet[2832]: E1104 23:52:03.997525 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:03.998153 containerd[1638]: time="2025-11-04T23:52:03.998097333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-flmmk,Uid:17b8e2b6-209b-4eb5-b124-29c5d32cce55,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:04.041585 containerd[1638]: time="2025-11-04T23:52:04.040208241Z" level=info msg="connecting to shim 1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2" address="unix:///run/containerd/s/b8a80c9dd0659c48fd9d1d6f90ae62ad4d4aaf620e3e00f721c5692d9fdf3be5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:04.044863 containerd[1638]: time="2025-11-04T23:52:04.044806564Z" level=info msg="connecting to shim e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2" 
address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:04.063580 containerd[1638]: time="2025-11-04T23:52:04.061747468Z" level=info msg="connecting to shim 6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2" address="unix:///run/containerd/s/e6f259d4e2749e109f5d0588e8678150a9d7176f54378079612049babf1e96b2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:04.121831 systemd[1]: Started cri-containerd-6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2.scope - libcontainer container 6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2. Nov 4 23:52:04.128621 systemd[1]: Started cri-containerd-1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2.scope - libcontainer container 1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2. Nov 4 23:52:04.132150 systemd[1]: Started cri-containerd-e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2.scope - libcontainer container e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2. 
Nov 4 23:52:04.181411 containerd[1638]: time="2025-11-04T23:52:04.181348249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xs278,Uid:dd2354f4-a0a6-4875-8056-e9bfdaa952cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2\"" Nov 4 23:52:04.183240 kubelet[2832]: E1104 23:52:04.183190 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:04.192764 containerd[1638]: time="2025-11-04T23:52:04.192691466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h678q,Uid:fd16a140-06b9-436e-af33-de26c18ef27a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\"" Nov 4 23:52:04.193619 kubelet[2832]: E1104 23:52:04.193590 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:04.194520 containerd[1638]: time="2025-11-04T23:52:04.194477435Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 23:52:04.196672 containerd[1638]: time="2025-11-04T23:52:04.196627312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-flmmk,Uid:17b8e2b6-209b-4eb5-b124-29c5d32cce55,Namespace:kube-system,Attempt:0,} returns sandbox id \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\"" Nov 4 23:52:04.196891 containerd[1638]: time="2025-11-04T23:52:04.196860117Z" level=info msg="CreateContainer within sandbox \"1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:52:04.197342 kubelet[2832]: E1104 23:52:04.197313 2832 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:04.213342 containerd[1638]: time="2025-11-04T23:52:04.213213036Z" level=info msg="Container 7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:04.225084 containerd[1638]: time="2025-11-04T23:52:04.225040109Z" level=info msg="CreateContainer within sandbox \"1c1c032290fb922a9831fb8c165e5929df588d16c6b4fde0fe46c4d6428f5bd2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d\"" Nov 4 23:52:04.225702 containerd[1638]: time="2025-11-04T23:52:04.225649996Z" level=info msg="StartContainer for \"7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d\"" Nov 4 23:52:04.227092 containerd[1638]: time="2025-11-04T23:52:04.227059966Z" level=info msg="connecting to shim 7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d" address="unix:///run/containerd/s/b8a80c9dd0659c48fd9d1d6f90ae62ad4d4aaf620e3e00f721c5692d9fdf3be5" protocol=ttrpc version=3 Nov 4 23:52:04.255946 systemd[1]: Started cri-containerd-7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d.scope - libcontainer container 7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d. 
Nov 4 23:52:04.326616 containerd[1638]: time="2025-11-04T23:52:04.326277833Z" level=info msg="StartContainer for \"7ddf57f427a9e70f979ba08685898194d15450214a61ab83c41e46c98a66c70d\" returns successfully" Nov 4 23:52:04.799590 kubelet[2832]: E1104 23:52:04.797312 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:04.809897 kubelet[2832]: I1104 23:52:04.809829 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xs278" podStartSLOduration=1.8098072090000001 podStartE2EDuration="1.809807209s" podCreationTimestamp="2025-11-04 23:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:04.80864811 +0000 UTC m=+8.182722038" watchObservedRunningTime="2025-11-04 23:52:04.809807209 +0000 UTC m=+8.183881137" Nov 4 23:52:10.310060 kubelet[2832]: E1104 23:52:10.309571 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:10.810826 kubelet[2832]: E1104 23:52:10.810775 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:17.068935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239334048.mount: Deactivated successfully. 
Nov 4 23:52:20.722566 containerd[1638]: time="2025-11-04T23:52:20.722438022Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:20.783146 containerd[1638]: time="2025-11-04T23:52:20.783051961Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 4 23:52:20.801849 containerd[1638]: time="2025-11-04T23:52:20.801790405Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:20.803661 containerd[1638]: time="2025-11-04T23:52:20.803590800Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.609057178s" Nov 4 23:52:20.803661 containerd[1638]: time="2025-11-04T23:52:20.803637900Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 4 23:52:20.804693 containerd[1638]: time="2025-11-04T23:52:20.804651483Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 23:52:20.872634 containerd[1638]: time="2025-11-04T23:52:20.872573829Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 23:52:21.204363 containerd[1638]: time="2025-11-04T23:52:21.204298761Z" level=info msg="Container 33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:21.399564 containerd[1638]: time="2025-11-04T23:52:21.399450214Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\"" Nov 4 23:52:21.400580 containerd[1638]: time="2025-11-04T23:52:21.400154479Z" level=info msg="StartContainer for \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\"" Nov 4 23:52:21.401805 containerd[1638]: time="2025-11-04T23:52:21.401762930Z" level=info msg="connecting to shim 33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6" address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" protocol=ttrpc version=3 Nov 4 23:52:21.434722 systemd[1]: Started cri-containerd-33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6.scope - libcontainer container 33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6. Nov 4 23:52:21.486410 systemd[1]: cri-containerd-33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6.scope: Deactivated successfully. 
Nov 4 23:52:21.487973 containerd[1638]: time="2025-11-04T23:52:21.487917038Z" level=info msg="StartContainer for \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" returns successfully" Nov 4 23:52:21.488819 containerd[1638]: time="2025-11-04T23:52:21.488773823Z" level=info msg="received exit event container_id:\"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" id:\"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" pid:3263 exited_at:{seconds:1762300341 nanos:488342284}" Nov 4 23:52:21.489482 containerd[1638]: time="2025-11-04T23:52:21.489444615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" id:\"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" pid:3263 exited_at:{seconds:1762300341 nanos:488342284}" Nov 4 23:52:21.518354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6-rootfs.mount: Deactivated successfully. 
Nov 4 23:52:21.835414 kubelet[2832]: E1104 23:52:21.835371 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:22.839455 kubelet[2832]: E1104 23:52:22.839411 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:23.054692 containerd[1638]: time="2025-11-04T23:52:23.054616962Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 23:52:23.304895 containerd[1638]: time="2025-11-04T23:52:23.304675253Z" level=info msg="Container 5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:23.309112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569090348.mount: Deactivated successfully. 
Nov 4 23:52:23.312120 containerd[1638]: time="2025-11-04T23:52:23.312072866Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\"" Nov 4 23:52:23.312629 containerd[1638]: time="2025-11-04T23:52:23.312602470Z" level=info msg="StartContainer for \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\"" Nov 4 23:52:23.313957 containerd[1638]: time="2025-11-04T23:52:23.313924838Z" level=info msg="connecting to shim 5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023" address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" protocol=ttrpc version=3 Nov 4 23:52:23.339682 systemd[1]: Started cri-containerd-5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023.scope - libcontainer container 5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023. Nov 4 23:52:23.371883 containerd[1638]: time="2025-11-04T23:52:23.371828948Z" level=info msg="StartContainer for \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" returns successfully" Nov 4 23:52:23.386449 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:52:23.386730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:52:23.386799 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:52:23.389058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:52:23.391184 systemd[1]: cri-containerd-5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023.scope: Deactivated successfully. 
Nov 4 23:52:23.392050 containerd[1638]: time="2025-11-04T23:52:23.391768548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" id:\"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" pid:3307 exited_at:{seconds:1762300343 nanos:391312684}" Nov 4 23:52:23.392050 containerd[1638]: time="2025-11-04T23:52:23.391977814Z" level=info msg="received exit event container_id:\"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" id:\"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" pid:3307 exited_at:{seconds:1762300343 nanos:391312684}" Nov 4 23:52:23.416111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023-rootfs.mount: Deactivated successfully. Nov 4 23:52:23.427978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:52:23.843452 kubelet[2832]: E1104 23:52:23.843411 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:24.091479 containerd[1638]: time="2025-11-04T23:52:24.091425360Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:52:24.137005 containerd[1638]: time="2025-11-04T23:52:24.136855000Z" level=info msg="Container 424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:24.146253 containerd[1638]: time="2025-11-04T23:52:24.146190843Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\"" Nov 
4 23:52:24.147089 containerd[1638]: time="2025-11-04T23:52:24.147045633Z" level=info msg="StartContainer for \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\"" Nov 4 23:52:24.148794 containerd[1638]: time="2025-11-04T23:52:24.148762837Z" level=info msg="connecting to shim 424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586" address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" protocol=ttrpc version=3 Nov 4 23:52:24.181813 systemd[1]: Started cri-containerd-424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586.scope - libcontainer container 424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586. Nov 4 23:52:24.240987 systemd[1]: cri-containerd-424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586.scope: Deactivated successfully. Nov 4 23:52:24.241781 systemd[1]: cri-containerd-424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586.scope: Consumed 33ms CPU time, 7.7M memory peak, 3.9M read from disk. 
Nov 4 23:52:24.243265 containerd[1638]: time="2025-11-04T23:52:24.243145054Z" level=info msg="received exit event container_id:\"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" id:\"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" pid:3369 exited_at:{seconds:1762300344 nanos:242939425}" Nov 4 23:52:24.243372 containerd[1638]: time="2025-11-04T23:52:24.243295039Z" level=info msg="StartContainer for \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" returns successfully" Nov 4 23:52:24.243736 containerd[1638]: time="2025-11-04T23:52:24.243427539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" id:\"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" pid:3369 exited_at:{seconds:1762300344 nanos:242939425}" Nov 4 23:52:24.481608 containerd[1638]: time="2025-11-04T23:52:24.481435230Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:24.482372 containerd[1638]: time="2025-11-04T23:52:24.482326910Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 4 23:52:24.484048 containerd[1638]: time="2025-11-04T23:52:24.484006313Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:24.486001 containerd[1638]: time="2025-11-04T23:52:24.485954865Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.681269418s" Nov 4 23:52:24.486091 containerd[1638]: time="2025-11-04T23:52:24.486003417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 4 23:52:24.491392 containerd[1638]: time="2025-11-04T23:52:24.491337417Z" level=info msg="CreateContainer within sandbox \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 23:52:24.500213 containerd[1638]: time="2025-11-04T23:52:24.499988814Z" level=info msg="Container e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:24.507651 containerd[1638]: time="2025-11-04T23:52:24.507599868Z" level=info msg="CreateContainer within sandbox \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\"" Nov 4 23:52:24.508230 containerd[1638]: time="2025-11-04T23:52:24.508169869Z" level=info msg="StartContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\"" Nov 4 23:52:24.509227 containerd[1638]: time="2025-11-04T23:52:24.509183509Z" level=info msg="connecting to shim e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2" address="unix:///run/containerd/s/e6f259d4e2749e109f5d0588e8678150a9d7176f54378079612049babf1e96b2" protocol=ttrpc version=3 Nov 4 23:52:24.531887 systemd[1]: Started cri-containerd-e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2.scope - libcontainer container e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2. 
Nov 4 23:52:24.567859 containerd[1638]: time="2025-11-04T23:52:24.567702346Z" level=info msg="StartContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" returns successfully" Nov 4 23:52:24.847463 kubelet[2832]: E1104 23:52:24.847411 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:24.851100 kubelet[2832]: E1104 23:52:24.851076 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:25.362927 containerd[1638]: time="2025-11-04T23:52:25.362869008Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:52:25.433573 kubelet[2832]: I1104 23:52:25.433147 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-flmmk" podStartSLOduration=2.144828825 podStartE2EDuration="22.433123625s" podCreationTimestamp="2025-11-04 23:52:03 +0000 UTC" firstStartedPulling="2025-11-04 23:52:04.198511689 +0000 UTC m=+7.572585617" lastFinishedPulling="2025-11-04 23:52:24.486806489 +0000 UTC m=+27.860880417" observedRunningTime="2025-11-04 23:52:25.431819764 +0000 UTC m=+28.805893702" watchObservedRunningTime="2025-11-04 23:52:25.433123625 +0000 UTC m=+28.807197553" Nov 4 23:52:25.450357 containerd[1638]: time="2025-11-04T23:52:25.450287757Z" level=info msg="Container bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:25.464248 containerd[1638]: time="2025-11-04T23:52:25.464183196Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\"" Nov 4 23:52:25.465741 containerd[1638]: time="2025-11-04T23:52:25.465682557Z" level=info msg="StartContainer for \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\"" Nov 4 23:52:25.466971 containerd[1638]: time="2025-11-04T23:52:25.466925963Z" level=info msg="connecting to shim bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81" address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" protocol=ttrpc version=3 Nov 4 23:52:25.501711 systemd[1]: Started cri-containerd-bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81.scope - libcontainer container bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81. Nov 4 23:52:25.547245 systemd[1]: cri-containerd-bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81.scope: Deactivated successfully. Nov 4 23:52:25.549524 containerd[1638]: time="2025-11-04T23:52:25.549484242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" id:\"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" pid:3446 exited_at:{seconds:1762300345 nanos:548021922}" Nov 4 23:52:25.549967 containerd[1638]: time="2025-11-04T23:52:25.549919858Z" level=info msg="received exit event container_id:\"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" id:\"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" pid:3446 exited_at:{seconds:1762300345 nanos:548021922}" Nov 4 23:52:25.552855 containerd[1638]: time="2025-11-04T23:52:25.552799734Z" level=info msg="StartContainer for \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" returns successfully" Nov 4 23:52:25.575633 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81-rootfs.mount: Deactivated successfully. Nov 4 23:52:25.858581 kubelet[2832]: E1104 23:52:25.858519 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:25.861565 kubelet[2832]: E1104 23:52:25.859399 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:26.031170 containerd[1638]: time="2025-11-04T23:52:26.031102147Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 23:52:26.174841 containerd[1638]: time="2025-11-04T23:52:26.173676914Z" level=info msg="Container 3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:26.182919 containerd[1638]: time="2025-11-04T23:52:26.182862494Z" level=info msg="CreateContainer within sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\"" Nov 4 23:52:26.183577 containerd[1638]: time="2025-11-04T23:52:26.183516323Z" level=info msg="StartContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\"" Nov 4 23:52:26.184749 containerd[1638]: time="2025-11-04T23:52:26.184694866Z" level=info msg="connecting to shim 3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9" address="unix:///run/containerd/s/1c5cb98bbc2a96994f633f18b171946146b22eefd1722ce3e3529a9ff5865ebc" protocol=ttrpc version=3 Nov 4 23:52:26.213735 systemd[1]: Started 
cri-containerd-3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9.scope - libcontainer container 3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9. Nov 4 23:52:26.395329 containerd[1638]: time="2025-11-04T23:52:26.395290046Z" level=info msg="StartContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" returns successfully" Nov 4 23:52:26.454304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516436366.mount: Deactivated successfully. Nov 4 23:52:26.507353 containerd[1638]: time="2025-11-04T23:52:26.507301715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" id:\"0c295e9291edecd2c39982dcf17ea465684f710d64a69260329406bc9d4e33e1\" pid:3522 exited_at:{seconds:1762300346 nanos:506919061}" Nov 4 23:52:26.565328 kubelet[2832]: I1104 23:52:26.565273 2832 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:52:27.131031 kubelet[2832]: E1104 23:52:27.130905 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:27.143137 systemd[1]: Created slice kubepods-burstable-pod4cb5bf6e_537d_41e4_829f_176d9f2f75d8.slice - libcontainer container kubepods-burstable-pod4cb5bf6e_537d_41e4_829f_176d9f2f75d8.slice. Nov 4 23:52:27.157301 systemd[1]: Created slice kubepods-burstable-podb1baa17b_7812_4513_ad2e_c41b12d09323.slice - libcontainer container kubepods-burstable-podb1baa17b_7812_4513_ad2e_c41b12d09323.slice. 
Nov 4 23:52:27.275234 kubelet[2832]: I1104 23:52:27.275173 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lvq\" (UniqueName: \"kubernetes.io/projected/4cb5bf6e-537d-41e4-829f-176d9f2f75d8-kube-api-access-z5lvq\") pod \"coredns-674b8bbfcf-2k6rv\" (UID: \"4cb5bf6e-537d-41e4-829f-176d9f2f75d8\") " pod="kube-system/coredns-674b8bbfcf-2k6rv" Nov 4 23:52:27.275234 kubelet[2832]: I1104 23:52:27.275231 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cb5bf6e-537d-41e4-829f-176d9f2f75d8-config-volume\") pod \"coredns-674b8bbfcf-2k6rv\" (UID: \"4cb5bf6e-537d-41e4-829f-176d9f2f75d8\") " pod="kube-system/coredns-674b8bbfcf-2k6rv" Nov 4 23:52:27.275426 kubelet[2832]: I1104 23:52:27.275263 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1baa17b-7812-4513-ad2e-c41b12d09323-config-volume\") pod \"coredns-674b8bbfcf-xvbt5\" (UID: \"b1baa17b-7812-4513-ad2e-c41b12d09323\") " pod="kube-system/coredns-674b8bbfcf-xvbt5" Nov 4 23:52:27.275426 kubelet[2832]: I1104 23:52:27.275281 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g696v\" (UniqueName: \"kubernetes.io/projected/b1baa17b-7812-4513-ad2e-c41b12d09323-kube-api-access-g696v\") pod \"coredns-674b8bbfcf-xvbt5\" (UID: \"b1baa17b-7812-4513-ad2e-c41b12d09323\") " pod="kube-system/coredns-674b8bbfcf-xvbt5" Nov 4 23:52:27.748803 kubelet[2832]: E1104 23:52:27.748729 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:27.749637 containerd[1638]: time="2025-11-04T23:52:27.749581885Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-2k6rv,Uid:4cb5bf6e-537d-41e4-829f-176d9f2f75d8,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:27.751313 kubelet[2832]: I1104 23:52:27.751245 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h678q" podStartSLOduration=8.140799393 podStartE2EDuration="24.751226429s" podCreationTimestamp="2025-11-04 23:52:03 +0000 UTC" firstStartedPulling="2025-11-04 23:52:04.194073902 +0000 UTC m=+7.568147830" lastFinishedPulling="2025-11-04 23:52:20.804500938 +0000 UTC m=+24.178574866" observedRunningTime="2025-11-04 23:52:27.397850555 +0000 UTC m=+30.771924483" watchObservedRunningTime="2025-11-04 23:52:27.751226429 +0000 UTC m=+31.125300357" Nov 4 23:52:27.761195 kubelet[2832]: E1104 23:52:27.761149 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:27.761730 containerd[1638]: time="2025-11-04T23:52:27.761684587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvbt5,Uid:b1baa17b-7812-4513-ad2e-c41b12d09323,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:28.107649 kubelet[2832]: E1104 23:52:28.107620 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:29.110134 kubelet[2832]: E1104 23:52:29.110095 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:29.812324 systemd-networkd[1532]: cilium_host: Link UP Nov 4 23:52:29.812494 systemd-networkd[1532]: cilium_net: Link UP Nov 4 23:52:29.812734 systemd-networkd[1532]: cilium_net: Gained carrier Nov 4 23:52:29.812956 systemd-networkd[1532]: cilium_host: Gained carrier Nov 4 23:52:29.931668 systemd-networkd[1532]: 
cilium_vxlan: Link UP Nov 4 23:52:29.931973 systemd-networkd[1532]: cilium_vxlan: Gained carrier Nov 4 23:52:30.012837 systemd-networkd[1532]: cilium_net: Gained IPv6LL Nov 4 23:52:30.200588 kernel: NET: Registered PF_ALG protocol family Nov 4 23:52:30.701871 systemd-networkd[1532]: cilium_host: Gained IPv6LL Nov 4 23:52:31.057602 systemd-networkd[1532]: lxc_health: Link UP Nov 4 23:52:31.059451 systemd-networkd[1532]: lxc_health: Gained carrier Nov 4 23:52:31.340822 systemd-networkd[1532]: cilium_vxlan: Gained IPv6LL Nov 4 23:52:31.485674 kernel: eth0: renamed from tmp694d8 Nov 4 23:52:31.486558 systemd-networkd[1532]: lxcdaf0b1e35bf6: Link UP Nov 4 23:52:31.487058 systemd-networkd[1532]: lxcdaf0b1e35bf6: Gained carrier Nov 4 23:52:31.575760 systemd-networkd[1532]: lxca7878866ad4b: Link UP Nov 4 23:52:31.578617 kernel: eth0: renamed from tmp68884 Nov 4 23:52:31.579686 systemd-networkd[1532]: lxca7878866ad4b: Gained carrier Nov 4 23:52:31.960824 kubelet[2832]: E1104 23:52:31.960616 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:32.117296 kubelet[2832]: E1104 23:52:32.117255 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:32.300844 systemd-networkd[1532]: lxc_health: Gained IPv6LL Nov 4 23:52:32.876812 systemd-networkd[1532]: lxca7878866ad4b: Gained IPv6LL Nov 4 23:52:33.119264 kubelet[2832]: E1104 23:52:33.119222 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:33.388868 systemd-networkd[1532]: lxcdaf0b1e35bf6: Gained IPv6LL Nov 4 23:52:34.226749 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:55828.service - OpenSSH per-connection server daemon 
(10.0.0.1:55828). Nov 4 23:52:34.303239 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 55828 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:34.306100 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:34.313376 systemd-logind[1623]: New session 8 of user core. Nov 4 23:52:34.318862 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:52:34.669490 sshd[3990]: Connection closed by 10.0.0.1 port 55828 Nov 4 23:52:34.669939 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:34.675795 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:55828.service: Deactivated successfully. Nov 4 23:52:34.678654 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:52:34.680575 systemd-logind[1623]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:52:34.682097 systemd-logind[1623]: Removed session 8. Nov 4 23:52:35.220337 containerd[1638]: time="2025-11-04T23:52:35.219750114Z" level=info msg="connecting to shim 694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13" address="unix:///run/containerd/s/8a6d1c0d2ecd5a755610b2f6abd189837f37c52efdef89ceeac11a4eabe59213" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:35.232349 containerd[1638]: time="2025-11-04T23:52:35.232285514Z" level=info msg="connecting to shim 688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9" address="unix:///run/containerd/s/10ec15504493063a416f777e4b9ac366d73d70cfea3e9668667f3c30d194f804" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:35.263936 systemd[1]: Started cri-containerd-694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13.scope - libcontainer container 694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13. 
Nov 4 23:52:35.269602 systemd[1]: Started cri-containerd-688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9.scope - libcontainer container 688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9. Nov 4 23:52:35.281077 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:52:35.289508 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:52:35.316370 containerd[1638]: time="2025-11-04T23:52:35.316286012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2k6rv,Uid:4cb5bf6e-537d-41e4-829f-176d9f2f75d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13\"" Nov 4 23:52:35.324961 kubelet[2832]: E1104 23:52:35.324925 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:35.331448 containerd[1638]: time="2025-11-04T23:52:35.331360666Z" level=info msg="CreateContainer within sandbox \"694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:52:35.339022 containerd[1638]: time="2025-11-04T23:52:35.338981210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvbt5,Uid:b1baa17b-7812-4513-ad2e-c41b12d09323,Namespace:kube-system,Attempt:0,} returns sandbox id \"688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9\"" Nov 4 23:52:35.339953 kubelet[2832]: E1104 23:52:35.339882 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:35.349217 containerd[1638]: time="2025-11-04T23:52:35.346288733Z" level=info msg="CreateContainer within 
sandbox \"688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:52:35.351337 containerd[1638]: time="2025-11-04T23:52:35.351285003Z" level=info msg="Container 574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:35.357703 containerd[1638]: time="2025-11-04T23:52:35.357628843Z" level=info msg="Container 8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:35.363578 containerd[1638]: time="2025-11-04T23:52:35.363473487Z" level=info msg="CreateContainer within sandbox \"694d800836e4aad4b6cc11651c415be5371631b96b9a33923e90501d8ae5fd13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4\"" Nov 4 23:52:35.365016 containerd[1638]: time="2025-11-04T23:52:35.364965390Z" level=info msg="StartContainer for \"574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4\"" Nov 4 23:52:35.366402 containerd[1638]: time="2025-11-04T23:52:35.366344078Z" level=info msg="connecting to shim 574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4" address="unix:///run/containerd/s/8a6d1c0d2ecd5a755610b2f6abd189837f37c52efdef89ceeac11a4eabe59213" protocol=ttrpc version=3 Nov 4 23:52:35.371275 containerd[1638]: time="2025-11-04T23:52:35.371214941Z" level=info msg="CreateContainer within sandbox \"688849e8ce77dc0aa029ea4b53dffd3a43f0313142e71c7ebe33af215f9dc8a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a\"" Nov 4 23:52:35.375711 containerd[1638]: time="2025-11-04T23:52:35.375644109Z" level=info msg="StartContainer for \"8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a\"" Nov 4 23:52:35.378118 containerd[1638]: time="2025-11-04T23:52:35.377871783Z" level=info 
msg="connecting to shim 8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a" address="unix:///run/containerd/s/10ec15504493063a416f777e4b9ac366d73d70cfea3e9668667f3c30d194f804" protocol=ttrpc version=3 Nov 4 23:52:35.397703 systemd[1]: Started cri-containerd-574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4.scope - libcontainer container 574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4. Nov 4 23:52:35.429825 systemd[1]: Started cri-containerd-8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a.scope - libcontainer container 8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a. Nov 4 23:52:35.460169 containerd[1638]: time="2025-11-04T23:52:35.460109316Z" level=info msg="StartContainer for \"574735a99f8c10a8e3886d5ee607592c9ae00cb546b5d9e4fdf3ca25e112f8b4\" returns successfully" Nov 4 23:52:35.611261 containerd[1638]: time="2025-11-04T23:52:35.611179176Z" level=info msg="StartContainer for \"8c71bd49ae6e70ea897a8738a228eec857c44440b4bc7c665a75421f8ad3eb5a\" returns successfully" Nov 4 23:52:36.131342 kubelet[2832]: E1104 23:52:36.130871 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.133553 kubelet[2832]: E1104 23:52:36.133458 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.212941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315017000.mount: Deactivated successfully. 
Nov 4 23:52:36.289881 kubelet[2832]: I1104 23:52:36.289198 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2k6rv" podStartSLOduration=33.289169027 podStartE2EDuration="33.289169027s" podCreationTimestamp="2025-11-04 23:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:36.28860542 +0000 UTC m=+39.662679338" watchObservedRunningTime="2025-11-04 23:52:36.289169027 +0000 UTC m=+39.663242955" Nov 4 23:52:36.290162 kubelet[2832]: I1104 23:52:36.290125 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xvbt5" podStartSLOduration=33.290113463 podStartE2EDuration="33.290113463s" podCreationTimestamp="2025-11-04 23:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:36.268849997 +0000 UTC m=+39.642923925" watchObservedRunningTime="2025-11-04 23:52:36.290113463 +0000 UTC m=+39.664187401" Nov 4 23:52:37.136828 kubelet[2832]: E1104 23:52:37.136213 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:37.136828 kubelet[2832]: E1104 23:52:37.136429 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:38.138444 kubelet[2832]: E1104 23:52:38.138308 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:38.139017 kubelet[2832]: E1104 23:52:38.138617 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:39.688411 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:55832.service - OpenSSH per-connection server daemon (10.0.0.1:55832). Nov 4 23:52:39.768350 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 55832 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:39.770206 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:39.775437 systemd-logind[1623]: New session 9 of user core. Nov 4 23:52:39.783791 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:52:39.971758 sshd[4183]: Connection closed by 10.0.0.1 port 55832 Nov 4 23:52:39.972047 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:39.978115 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:55832.service: Deactivated successfully. Nov 4 23:52:39.980623 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:52:39.981895 systemd-logind[1623]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:52:39.983172 systemd-logind[1623]: Removed session 9. Nov 4 23:52:44.988959 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Nov 4 23:52:45.047182 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:45.048627 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:45.053695 systemd-logind[1623]: New session 10 of user core. Nov 4 23:52:45.067772 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:52:45.348262 sshd[4200]: Connection closed by 10.0.0.1 port 49962 Nov 4 23:52:45.348676 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:45.353522 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:49962.service: Deactivated successfully. 
Nov 4 23:52:45.355811 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:52:45.356827 systemd-logind[1623]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:52:45.358330 systemd-logind[1623]: Removed session 10. Nov 4 23:52:50.369798 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:49972.service - OpenSSH per-connection server daemon (10.0.0.1:49972). Nov 4 23:52:50.442900 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 49972 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:50.445139 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:50.451429 systemd-logind[1623]: New session 11 of user core. Nov 4 23:52:50.456792 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:52:50.586037 sshd[4218]: Connection closed by 10.0.0.1 port 49972 Nov 4 23:52:50.586398 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:50.590735 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:49972.service: Deactivated successfully. Nov 4 23:52:50.592859 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:52:50.594035 systemd-logind[1623]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:52:50.595397 systemd-logind[1623]: Removed session 11. Nov 4 23:52:55.606504 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:44890.service - OpenSSH per-connection server daemon (10.0.0.1:44890). Nov 4 23:52:55.669822 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 44890 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:55.671489 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:55.676437 systemd-logind[1623]: New session 12 of user core. Nov 4 23:52:55.690743 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 4 23:52:55.815669 sshd[4235]: Connection closed by 10.0.0.1 port 44890 Nov 4 23:52:55.816137 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:55.826230 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:44890.service: Deactivated successfully. Nov 4 23:52:55.828392 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:52:55.829272 systemd-logind[1623]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:52:55.832693 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:44892.service - OpenSSH per-connection server daemon (10.0.0.1:44892). Nov 4 23:52:55.833414 systemd-logind[1623]: Removed session 12. Nov 4 23:52:55.895990 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 44892 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:55.898114 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:55.905519 systemd-logind[1623]: New session 13 of user core. Nov 4 23:52:55.921866 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:52:56.207756 sshd[4252]: Connection closed by 10.0.0.1 port 44892 Nov 4 23:52:56.208074 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:56.219713 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:44892.service: Deactivated successfully. Nov 4 23:52:56.222579 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:52:56.223991 systemd-logind[1623]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:52:56.227444 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:44900.service - OpenSSH per-connection server daemon (10.0.0.1:44900). Nov 4 23:52:56.229036 systemd-logind[1623]: Removed session 13. 
Nov 4 23:52:56.294746 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 44900 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:52:56.296407 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:56.301447 systemd-logind[1623]: New session 14 of user core. Nov 4 23:52:56.312700 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:52:56.441031 sshd[4267]: Connection closed by 10.0.0.1 port 44900 Nov 4 23:52:56.441430 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:56.449611 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:44900.service: Deactivated successfully. Nov 4 23:52:56.452452 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:52:56.453652 systemd-logind[1623]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:52:56.455275 systemd-logind[1623]: Removed session 14. Nov 4 23:53:01.459524 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916). Nov 4 23:53:01.533490 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:53:01.535103 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:01.539820 systemd-logind[1623]: New session 15 of user core. Nov 4 23:53:01.549784 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:53:01.675072 sshd[4287]: Connection closed by 10.0.0.1 port 44916 Nov 4 23:53:01.675465 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:01.680943 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:44916.service: Deactivated successfully. Nov 4 23:53:01.683276 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:53:01.684302 systemd-logind[1623]: Session 15 logged out. Waiting for processes to exit. 
Nov 4 23:53:01.685903 systemd-logind[1623]: Removed session 15.
Nov 4 23:53:06.690903 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:39520.service - OpenSSH per-connection server daemon (10.0.0.1:39520).
Nov 4 23:53:06.763415 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:06.765463 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:06.770630 systemd-logind[1623]: New session 16 of user core.
Nov 4 23:53:06.788975 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 4 23:53:06.930105 sshd[4306]: Connection closed by 10.0.0.1 port 39520
Nov 4 23:53:06.932804 sshd-session[4303]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:06.938758 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:39520.service: Deactivated successfully.
Nov 4 23:53:06.941382 systemd[1]: session-16.scope: Deactivated successfully.
Nov 4 23:53:06.942397 systemd-logind[1623]: Session 16 logged out. Waiting for processes to exit.
Nov 4 23:53:06.944096 systemd-logind[1623]: Removed session 16.
Nov 4 23:53:11.948188 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534).
Nov 4 23:53:12.009050 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:12.010839 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:12.016325 systemd-logind[1623]: New session 17 of user core.
Nov 4 23:53:12.022714 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 4 23:53:12.160139 sshd[4322]: Connection closed by 10.0.0.1 port 39534
Nov 4 23:53:12.160507 sshd-session[4319]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:12.173049 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:39534.service: Deactivated successfully.
Nov 4 23:53:12.175531 systemd[1]: session-17.scope: Deactivated successfully.
Nov 4 23:53:12.176423 systemd-logind[1623]: Session 17 logged out. Waiting for processes to exit.
Nov 4 23:53:12.180051 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536).
Nov 4 23:53:12.180844 systemd-logind[1623]: Removed session 17.
Nov 4 23:53:12.248821 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:12.250859 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:12.257437 systemd-logind[1623]: New session 18 of user core.
Nov 4 23:53:12.268942 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 4 23:53:12.771338 kubelet[2832]: E1104 23:53:12.770599 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:12.847352 sshd[4338]: Connection closed by 10.0.0.1 port 39536
Nov 4 23:53:12.847780 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:12.858729 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:39536.service: Deactivated successfully.
Nov 4 23:53:12.861460 systemd[1]: session-18.scope: Deactivated successfully.
Nov 4 23:53:12.862583 systemd-logind[1623]: Session 18 logged out. Waiting for processes to exit.
Nov 4 23:53:12.867018 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:39552.service - OpenSSH per-connection server daemon (10.0.0.1:39552).
Nov 4 23:53:12.867780 systemd-logind[1623]: Removed session 18.
Nov 4 23:53:12.929524 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 39552 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:12.931798 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:12.936905 systemd-logind[1623]: New session 19 of user core.
Nov 4 23:53:12.946703 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 23:53:14.042111 sshd[4353]: Connection closed by 10.0.0.1 port 39552
Nov 4 23:53:14.042848 sshd-session[4350]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:14.055465 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:39552.service: Deactivated successfully.
Nov 4 23:53:14.058225 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 23:53:14.060987 systemd-logind[1623]: Session 19 logged out. Waiting for processes to exit.
Nov 4 23:53:14.067267 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:55044.service - OpenSSH per-connection server daemon (10.0.0.1:55044).
Nov 4 23:53:14.070118 systemd-logind[1623]: Removed session 19.
Nov 4 23:53:14.133174 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:14.135220 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:14.140664 systemd-logind[1623]: New session 20 of user core.
Nov 4 23:53:14.154782 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 23:53:14.570321 sshd[4375]: Connection closed by 10.0.0.1 port 55044
Nov 4 23:53:14.570858 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:14.584081 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:55044.service: Deactivated successfully.
Nov 4 23:53:14.587207 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 23:53:14.588812 systemd-logind[1623]: Session 20 logged out. Waiting for processes to exit.
Nov 4 23:53:14.595417 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:55050.service - OpenSSH per-connection server daemon (10.0.0.1:55050).
Nov 4 23:53:14.597265 systemd-logind[1623]: Removed session 20.
Nov 4 23:53:14.663518 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 55050 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:14.665258 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:14.670283 systemd-logind[1623]: New session 21 of user core.
Nov 4 23:53:14.687743 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 23:53:14.818859 sshd[4389]: Connection closed by 10.0.0.1 port 55050
Nov 4 23:53:14.819311 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:14.824503 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:55050.service: Deactivated successfully.
Nov 4 23:53:14.826816 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 23:53:14.829191 systemd-logind[1623]: Session 21 logged out. Waiting for processes to exit.
Nov 4 23:53:14.830382 systemd-logind[1623]: Removed session 21.
Nov 4 23:53:19.838070 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:55052.service - OpenSSH per-connection server daemon (10.0.0.1:55052).
Nov 4 23:53:19.894595 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 55052 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:19.896311 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:19.901484 systemd-logind[1623]: New session 22 of user core.
Nov 4 23:53:19.911771 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 4 23:53:20.033003 sshd[4406]: Connection closed by 10.0.0.1 port 55052
Nov 4 23:53:20.033334 sshd-session[4403]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:20.038250 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:55052.service: Deactivated successfully.
Nov 4 23:53:20.041370 systemd[1]: session-22.scope: Deactivated successfully.
Nov 4 23:53:20.043107 systemd-logind[1623]: Session 22 logged out. Waiting for processes to exit.
Nov 4 23:53:20.045220 systemd-logind[1623]: Removed session 22.
Nov 4 23:53:22.765392 kubelet[2832]: E1104 23:53:22.765308 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:25.052769 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:46032.service - OpenSSH per-connection server daemon (10.0.0.1:46032).
Nov 4 23:53:25.123563 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 46032 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:25.125739 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:25.130820 systemd-logind[1623]: New session 23 of user core.
Nov 4 23:53:25.139668 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 23:53:25.249927 sshd[4424]: Connection closed by 10.0.0.1 port 46032
Nov 4 23:53:25.250320 sshd-session[4421]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:25.255047 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:46032.service: Deactivated successfully.
Nov 4 23:53:25.257465 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 23:53:25.258983 systemd-logind[1623]: Session 23 logged out. Waiting for processes to exit.
Nov 4 23:53:25.260398 systemd-logind[1623]: Removed session 23.
Nov 4 23:53:27.765796 kubelet[2832]: E1104 23:53:27.765741 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:30.263308 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:46038.service - OpenSSH per-connection server daemon (10.0.0.1:46038).
Nov 4 23:53:30.319273 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 46038 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:30.321007 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:30.327793 systemd-logind[1623]: New session 24 of user core.
Nov 4 23:53:30.337698 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 23:53:30.460094 sshd[4440]: Connection closed by 10.0.0.1 port 46038
Nov 4 23:53:30.460627 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:30.473344 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:46038.service: Deactivated successfully.
Nov 4 23:53:30.475404 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 23:53:30.476305 systemd-logind[1623]: Session 24 logged out. Waiting for processes to exit.
Nov 4 23:53:30.478811 systemd-logind[1623]: Removed session 24.
Nov 4 23:53:30.480145 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:46054.service - OpenSSH per-connection server daemon (10.0.0.1:46054).
Nov 4 23:53:30.540276 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:53:30.542844 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:30.548301 systemd-logind[1623]: New session 25 of user core.
Nov 4 23:53:30.560786 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 23:53:32.162303 containerd[1638]: time="2025-11-04T23:53:32.162147571Z" level=info msg="StopContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" with timeout 30 (s)"
Nov 4 23:53:32.174921 containerd[1638]: time="2025-11-04T23:53:32.174802012Z" level=info msg="Stop container \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" with signal terminated"
Nov 4 23:53:32.191853 systemd[1]: cri-containerd-e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2.scope: Deactivated successfully.
Nov 4 23:53:32.194301 containerd[1638]: time="2025-11-04T23:53:32.194257673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" id:\"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" pid:3411 exited_at:{seconds:1762300412 nanos:193834486}"
Nov 4 23:53:32.195741 containerd[1638]: time="2025-11-04T23:53:32.195710365Z" level=info msg="received exit event container_id:\"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" id:\"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" pid:3411 exited_at:{seconds:1762300412 nanos:193834486}"
Nov 4 23:53:32.222161 containerd[1638]: time="2025-11-04T23:53:32.222115759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" id:\"2eede5bff7cd146da01ed7fbdda463f5433a75f2f31203e02854cdeb61c0fbcc\" pid:4485 exited_at:{seconds:1762300412 nanos:221747447}"
Nov 4 23:53:32.225704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2-rootfs.mount: Deactivated successfully.
Nov 4 23:53:32.227106 containerd[1638]: time="2025-11-04T23:53:32.227070676Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 23:53:32.247921 containerd[1638]: time="2025-11-04T23:53:32.247864320Z" level=info msg="StopContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" with timeout 2 (s)"
Nov 4 23:53:32.248350 containerd[1638]: time="2025-11-04T23:53:32.248298780Z" level=info msg="Stop container \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" with signal terminated"
Nov 4 23:53:32.255360 containerd[1638]: time="2025-11-04T23:53:32.255263973Z" level=info msg="StopContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" returns successfully"
Nov 4 23:53:32.256574 containerd[1638]: time="2025-11-04T23:53:32.256479462Z" level=info msg="StopPodSandbox for \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\""
Nov 4 23:53:32.256758 containerd[1638]: time="2025-11-04T23:53:32.256716274Z" level=info msg="Container to stop \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.258502 systemd-networkd[1532]: lxc_health: Link DOWN
Nov 4 23:53:32.258522 systemd-networkd[1532]: lxc_health: Lost carrier
Nov 4 23:53:32.272684 systemd[1]: cri-containerd-6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2.scope: Deactivated successfully.
Nov 4 23:53:32.278988 containerd[1638]: time="2025-11-04T23:53:32.278806564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" id:\"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" pid:3016 exit_status:137 exited_at:{seconds:1762300412 nanos:277820812}"
Nov 4 23:53:32.285048 systemd[1]: cri-containerd-3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9.scope: Deactivated successfully.
Nov 4 23:53:32.285574 systemd[1]: cri-containerd-3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9.scope: Consumed 7.387s CPU time, 127.6M memory peak, 727K read from disk, 13.3M written to disk.
Nov 4 23:53:32.287821 containerd[1638]: time="2025-11-04T23:53:32.287722339Z" level=info msg="received exit event container_id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" pid:3482 exited_at:{seconds:1762300412 nanos:287507218}"
Nov 4 23:53:32.314078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2-rootfs.mount: Deactivated successfully.
Nov 4 23:53:32.317255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9-rootfs.mount: Deactivated successfully.
Nov 4 23:53:32.451100 containerd[1638]: time="2025-11-04T23:53:32.450400639Z" level=info msg="shim disconnected" id=6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2 namespace=k8s.io
Nov 4 23:53:32.451100 containerd[1638]: time="2025-11-04T23:53:32.450450034Z" level=warning msg="cleaning up after shim disconnected" id=6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2 namespace=k8s.io
Nov 4 23:53:32.451100 containerd[1638]: time="2025-11-04T23:53:32.450460934Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 4 23:53:32.451100 containerd[1638]: time="2025-11-04T23:53:32.450599218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" id:\"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" pid:3482 exited_at:{seconds:1762300412 nanos:287507218}"
Nov 4 23:53:32.453654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2-shm.mount: Deactivated successfully.
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.453915958Z" level=info msg="StopContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" returns successfully"
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455293076Z" level=info msg="StopPodSandbox for \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\""
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455367148Z" level=info msg="Container to stop \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455381425Z" level=info msg="Container to stop \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455392646Z" level=info msg="Container to stop \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455403507Z" level=info msg="Container to stop \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.455429 containerd[1638]: time="2025-11-04T23:53:32.455413806Z" level=info msg="Container to stop \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 23:53:32.460429 containerd[1638]: time="2025-11-04T23:53:32.460372040Z" level=info msg="TearDown network for sandbox \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" successfully"
Nov 4 23:53:32.460754 containerd[1638]: time="2025-11-04T23:53:32.460709884Z" level=info msg="StopPodSandbox for \"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" returns successfully"
Nov 4 23:53:32.466573 systemd[1]: cri-containerd-e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2.scope: Deactivated successfully.
Nov 4 23:53:32.468181 containerd[1638]: time="2025-11-04T23:53:32.468126439Z" level=info msg="received exit event sandbox_id:\"6845f0d5d58ef3c30c8c4f8624b62c3bc87bd47280b2dc319325f2526c2e4cf2\" exit_status:137 exited_at:{seconds:1762300412 nanos:277820812}"
Nov 4 23:53:32.469807 containerd[1638]: time="2025-11-04T23:53:32.469677218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" id:\"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" pid:3025 exit_status:137 exited_at:{seconds:1762300412 nanos:468139844}"
Nov 4 23:53:32.500358 kubelet[2832]: I1104 23:53:32.500295 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qvt\" (UniqueName: \"kubernetes.io/projected/17b8e2b6-209b-4eb5-b124-29c5d32cce55-kube-api-access-86qvt\") pod \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\" (UID: \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\") "
Nov 4 23:53:32.502464 kubelet[2832]: I1104 23:53:32.500725 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17b8e2b6-209b-4eb5-b124-29c5d32cce55-cilium-config-path\") pod \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\" (UID: \"17b8e2b6-209b-4eb5-b124-29c5d32cce55\") "
Nov 4 23:53:32.502075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2-rootfs.mount: Deactivated successfully.
Nov 4 23:53:32.507808 kubelet[2832]: I1104 23:53:32.507728 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b8e2b6-209b-4eb5-b124-29c5d32cce55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17b8e2b6-209b-4eb5-b124-29c5d32cce55" (UID: "17b8e2b6-209b-4eb5-b124-29c5d32cce55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 4 23:53:32.507941 kubelet[2832]: I1104 23:53:32.507910 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b8e2b6-209b-4eb5-b124-29c5d32cce55-kube-api-access-86qvt" (OuterVolumeSpecName: "kube-api-access-86qvt") pod "17b8e2b6-209b-4eb5-b124-29c5d32cce55" (UID: "17b8e2b6-209b-4eb5-b124-29c5d32cce55"). InnerVolumeSpecName "kube-api-access-86qvt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 4 23:53:32.516953 containerd[1638]: time="2025-11-04T23:53:32.516883161Z" level=info msg="shim disconnected" id=e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2 namespace=k8s.io
Nov 4 23:53:32.516953 containerd[1638]: time="2025-11-04T23:53:32.516937413Z" level=warning msg="cleaning up after shim disconnected" id=e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2 namespace=k8s.io
Nov 4 23:53:32.517125 containerd[1638]: time="2025-11-04T23:53:32.516957592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 4 23:53:32.517272 containerd[1638]: time="2025-11-04T23:53:32.517236986Z" level=info msg="received exit event sandbox_id:\"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" exit_status:137 exited_at:{seconds:1762300412 nanos:468139844}"
Nov 4 23:53:32.517476 containerd[1638]: time="2025-11-04T23:53:32.517436857Z" level=info msg="TearDown network for sandbox \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" successfully"
Nov 4 23:53:32.517574 containerd[1638]: time="2025-11-04T23:53:32.517555964Z" level=info msg="StopPodSandbox for \"e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2\" returns successfully"
Nov 4 23:53:32.602254 kubelet[2832]: I1104 23:53:32.602182 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-run\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602254 kubelet[2832]: I1104 23:53:32.602240 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-net\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602254 kubelet[2832]: I1104 23:53:32.602264 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-kernel\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602287 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-cgroup\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602309 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-hostproc\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602337 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cni-path\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602353 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-lib-modules\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602361 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602505 kubelet[2832]: I1104 23:53:32.602357 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602676 kubelet[2832]: I1104 23:53:32.602378 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-config-path\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602676 kubelet[2832]: I1104 23:53:32.602446 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-hostproc" (OuterVolumeSpecName: "hostproc") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602676 kubelet[2832]: I1104 23:53:32.602447 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-bpf-maps\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602676 kubelet[2832]: I1104 23:53:32.602465 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602676 kubelet[2832]: I1104 23:53:32.602491 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cni-path" (OuterVolumeSpecName: "cni-path") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602800 kubelet[2832]: I1104 23:53:32.602511 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602800 kubelet[2832]: I1104 23:53:32.602518 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-hubble-tls\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602800 kubelet[2832]: I1104 23:53:32.602531 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602800 kubelet[2832]: I1104 23:53:32.602590 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswqc\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-kube-api-access-lswqc\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602800 kubelet[2832]: I1104 23:53:32.602608 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602611 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-etc-cni-netd\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602633 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602645 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-xtables-lock\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602696 2832 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd16a140-06b9-436e-af33-de26c18ef27a-clustermesh-secrets\") pod \"fd16a140-06b9-436e-af33-de26c18ef27a\" (UID: \"fd16a140-06b9-436e-af33-de26c18ef27a\") "
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602740 2832 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-86qvt\" (UniqueName: \"kubernetes.io/projected/17b8e2b6-209b-4eb5-b124-29c5d32cce55-kube-api-access-86qvt\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.602918 kubelet[2832]: I1104 23:53:32.602756 2832 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17b8e2b6-209b-4eb5-b124-29c5d32cce55-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602768 2832 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602780 2832 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602792 2832 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602925 2832 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602944 2832 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602956 2832 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602968 2832 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-lib-modules\") on node \"localhost\" DevicePath 
\"\"" Nov 4 23:53:32.603056 kubelet[2832]: I1104 23:53:32.602982 2832 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.603232 kubelet[2832]: I1104 23:53:32.602995 2832 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.603462 kubelet[2832]: I1104 23:53:32.603434 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:53:32.606758 kubelet[2832]: I1104 23:53:32.606720 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:53:32.606895 kubelet[2832]: I1104 23:53:32.606729 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:53:32.607274 kubelet[2832]: I1104 23:53:32.607233 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-kube-api-access-lswqc" (OuterVolumeSpecName: "kube-api-access-lswqc") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "kube-api-access-lswqc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:53:32.607467 kubelet[2832]: I1104 23:53:32.607422 2832 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd16a140-06b9-436e-af33-de26c18ef27a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fd16a140-06b9-436e-af33-de26c18ef27a" (UID: "fd16a140-06b9-436e-af33-de26c18ef27a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:53:32.704236 kubelet[2832]: I1104 23:53:32.704077 2832 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd16a140-06b9-436e-af33-de26c18ef27a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.704236 kubelet[2832]: I1104 23:53:32.704130 2832 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.704236 kubelet[2832]: I1104 23:53:32.704144 2832 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lswqc\" (UniqueName: \"kubernetes.io/projected/fd16a140-06b9-436e-af33-de26c18ef27a-kube-api-access-lswqc\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.704236 kubelet[2832]: I1104 23:53:32.704158 2832 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/fd16a140-06b9-436e-af33-de26c18ef27a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.704236 kubelet[2832]: I1104 23:53:32.704169 2832 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd16a140-06b9-436e-af33-de26c18ef27a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:32.773624 systemd[1]: Removed slice kubepods-besteffort-pod17b8e2b6_209b_4eb5_b124_29c5d32cce55.slice - libcontainer container kubepods-besteffort-pod17b8e2b6_209b_4eb5_b124_29c5d32cce55.slice. Nov 4 23:53:32.775624 systemd[1]: Removed slice kubepods-burstable-podfd16a140_06b9_436e_af33_de26c18ef27a.slice - libcontainer container kubepods-burstable-podfd16a140_06b9_436e_af33_de26c18ef27a.slice. Nov 4 23:53:32.775775 systemd[1]: kubepods-burstable-podfd16a140_06b9_436e_af33_de26c18ef27a.slice: Consumed 7.512s CPU time, 127.9M memory peak, 4.6M read from disk, 13.3M written to disk. Nov 4 23:53:33.225571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4e07bdd62eec438f3f897d36e2376208ad16a795a1ca1131d75f9767c4f10c2-shm.mount: Deactivated successfully. Nov 4 23:53:33.225784 systemd[1]: var-lib-kubelet-pods-17b8e2b6\x2d209b\x2d4eb5\x2db124\x2d29c5d32cce55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86qvt.mount: Deactivated successfully. Nov 4 23:53:33.225917 systemd[1]: var-lib-kubelet-pods-fd16a140\x2d06b9\x2d436e\x2daf33\x2dde26c18ef27a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlswqc.mount: Deactivated successfully. Nov 4 23:53:33.226078 systemd[1]: var-lib-kubelet-pods-fd16a140\x2d06b9\x2d436e\x2daf33\x2dde26c18ef27a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 23:53:33.226202 systemd[1]: var-lib-kubelet-pods-fd16a140\x2d06b9\x2d436e\x2daf33\x2dde26c18ef27a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 4 23:53:33.282214 kubelet[2832]: I1104 23:53:33.282170 2832 scope.go:117] "RemoveContainer" containerID="e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2" Nov 4 23:53:33.284954 containerd[1638]: time="2025-11-04T23:53:33.284427130Z" level=info msg="RemoveContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\"" Nov 4 23:53:33.291238 containerd[1638]: time="2025-11-04T23:53:33.291194352Z" level=info msg="RemoveContainer for \"e721a70b8d4438e573126e18c51fb35d5678890d0eb954e68edb621efa50e7b2\" returns successfully" Nov 4 23:53:33.291648 kubelet[2832]: I1104 23:53:33.291576 2832 scope.go:117] "RemoveContainer" containerID="3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9" Nov 4 23:53:33.291800 update_engine[1625]: I20251104 23:53:33.291649 1625 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 4 23:53:33.291800 update_engine[1625]: I20251104 23:53:33.291699 1625 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 4 23:53:33.292893 update_engine[1625]: I20251104 23:53:33.291955 1625 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 4 23:53:33.292893 update_engine[1625]: I20251104 23:53:33.292573 1625 omaha_request_params.cc:62] Current group set to alpha Nov 4 23:53:33.293135 update_engine[1625]: I20251104 23:53:33.293103 1625 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 4 23:53:33.293438 containerd[1638]: time="2025-11-04T23:53:33.293411933Z" level=info msg="RemoveContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\"" Nov 4 23:53:33.293954 update_engine[1625]: I20251104 23:53:33.293283 1625 update_attempter.cc:643] Scheduling an action processor start. 
Nov 4 23:53:33.293983 update_engine[1625]: I20251104 23:53:33.293947 1625 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 23:53:33.299894 locksmithd[1656]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 4 23:53:33.300883 update_engine[1625]: I20251104 23:53:33.300829 1625 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 4 23:53:33.301160 update_engine[1625]: I20251104 23:53:33.301112 1625 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 23:53:33.301160 update_engine[1625]: I20251104 23:53:33.301129 1625 omaha_request_action.cc:272] Request: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: Nov 4 23:53:33.301160 update_engine[1625]: I20251104 23:53:33.301139 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:53:33.302832 containerd[1638]: time="2025-11-04T23:53:33.302723461Z" level=info msg="RemoveContainer for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" returns successfully" Nov 4 23:53:33.302990 kubelet[2832]: I1104 23:53:33.302954 2832 scope.go:117] "RemoveContainer" containerID="bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81" Nov 4 23:53:33.304295 containerd[1638]: time="2025-11-04T23:53:33.304255264Z" level=info msg="RemoveContainer for \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\"" Nov 4 23:53:33.306364 update_engine[1625]: I20251104 23:53:33.306306 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:53:33.307114 update_engine[1625]: I20251104 23:53:33.307050 1625 libcurl_http_fetcher.cc:449] Setting up timeout 
source: 1 seconds. Nov 4 23:53:33.309574 containerd[1638]: time="2025-11-04T23:53:33.309525539Z" level=info msg="RemoveContainer for \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" returns successfully" Nov 4 23:53:33.310009 kubelet[2832]: I1104 23:53:33.309972 2832 scope.go:117] "RemoveContainer" containerID="424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586" Nov 4 23:53:33.313085 containerd[1638]: time="2025-11-04T23:53:33.312843400Z" level=info msg="RemoveContainer for \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\"" Nov 4 23:53:33.317716 containerd[1638]: time="2025-11-04T23:53:33.317653548Z" level=info msg="RemoveContainer for \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" returns successfully" Nov 4 23:53:33.318018 kubelet[2832]: I1104 23:53:33.317978 2832 scope.go:117] "RemoveContainer" containerID="5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023" Nov 4 23:53:33.320793 containerd[1638]: time="2025-11-04T23:53:33.320760997Z" level=info msg="RemoveContainer for \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\"" Nov 4 23:53:33.325175 containerd[1638]: time="2025-11-04T23:53:33.325136075Z" level=info msg="RemoveContainer for \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" returns successfully" Nov 4 23:53:33.325341 kubelet[2832]: I1104 23:53:33.325305 2832 scope.go:117] "RemoveContainer" containerID="33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6" Nov 4 23:53:33.326010 update_engine[1625]: E20251104 23:53:33.325837 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:53:33.326010 update_engine[1625]: I20251104 23:53:33.325976 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 4 23:53:33.326792 containerd[1638]: time="2025-11-04T23:53:33.326759192Z" level=info msg="RemoveContainer for 
\"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\"" Nov 4 23:53:33.330335 containerd[1638]: time="2025-11-04T23:53:33.330290400Z" level=info msg="RemoveContainer for \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" returns successfully" Nov 4 23:53:33.330522 kubelet[2832]: I1104 23:53:33.330483 2832 scope.go:117] "RemoveContainer" containerID="3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9" Nov 4 23:53:33.330926 containerd[1638]: time="2025-11-04T23:53:33.330856661Z" level=error msg="ContainerStatus for \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\": not found" Nov 4 23:53:33.331118 kubelet[2832]: E1104 23:53:33.331087 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\": not found" containerID="3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9" Nov 4 23:53:33.331158 kubelet[2832]: I1104 23:53:33.331124 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9"} err="failed to get container status \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bc183be700ce2e975bd233d709d7fb1f0482f9420d4d02497476df8954a8ec9\": not found" Nov 4 23:53:33.331197 kubelet[2832]: I1104 23:53:33.331161 2832 scope.go:117] "RemoveContainer" containerID="bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81" Nov 4 23:53:33.331526 containerd[1638]: time="2025-11-04T23:53:33.331423262Z" level=error msg="ContainerStatus for 
\"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\": not found" Nov 4 23:53:33.331758 kubelet[2832]: E1104 23:53:33.331731 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\": not found" containerID="bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81" Nov 4 23:53:33.331805 kubelet[2832]: I1104 23:53:33.331763 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81"} err="failed to get container status \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf8a3ccf77614a4159779c98cbced008b72356a2ba31b96d6bfd3d4e18444f81\": not found" Nov 4 23:53:33.331805 kubelet[2832]: I1104 23:53:33.331783 2832 scope.go:117] "RemoveContainer" containerID="424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586" Nov 4 23:53:33.331977 containerd[1638]: time="2025-11-04T23:53:33.331934077Z" level=error msg="ContainerStatus for \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\": not found" Nov 4 23:53:33.332146 kubelet[2832]: E1104 23:53:33.332115 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\": not found" 
containerID="424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586" Nov 4 23:53:33.332197 kubelet[2832]: I1104 23:53:33.332157 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586"} err="failed to get container status \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\": rpc error: code = NotFound desc = an error occurred when try to find container \"424c6ec62c7923ebd108d94aaeb446e7ca53aada7b04f973e74c5375f7a54586\": not found" Nov 4 23:53:33.332240 kubelet[2832]: I1104 23:53:33.332194 2832 scope.go:117] "RemoveContainer" containerID="5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023" Nov 4 23:53:33.332467 containerd[1638]: time="2025-11-04T23:53:33.332429922Z" level=error msg="ContainerStatus for \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\": not found" Nov 4 23:53:33.332616 kubelet[2832]: E1104 23:53:33.332591 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\": not found" containerID="5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023" Nov 4 23:53:33.332650 kubelet[2832]: I1104 23:53:33.332616 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023"} err="failed to get container status \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b155834b2825098947cc293867cfc21973e24510348dde1d3484bf6da82e023\": not found" Nov 4 23:53:33.332650 
kubelet[2832]: I1104 23:53:33.332633 2832 scope.go:117] "RemoveContainer" containerID="33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6" Nov 4 23:53:33.332815 containerd[1638]: time="2025-11-04T23:53:33.332785020Z" level=error msg="ContainerStatus for \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\": not found" Nov 4 23:53:33.332908 kubelet[2832]: E1104 23:53:33.332878 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\": not found" containerID="33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6" Nov 4 23:53:33.332908 kubelet[2832]: I1104 23:53:33.332901 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6"} err="failed to get container status \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"33364506554296421de3be7b1390d34188debaacf39d0752b85fe4c3ff4814f6\": not found" Nov 4 23:53:34.064796 sshd[4456]: Connection closed by 10.0.0.1 port 46054 Nov 4 23:53:34.065484 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:34.078711 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:46054.service: Deactivated successfully. Nov 4 23:53:34.080777 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 23:53:34.081732 systemd-logind[1623]: Session 25 logged out. Waiting for processes to exit. Nov 4 23:53:34.084862 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:42358.service - OpenSSH per-connection server daemon (10.0.0.1:42358). 
Nov 4 23:53:34.085495 systemd-logind[1623]: Removed session 25. Nov 4 23:53:34.168300 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 42358 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:53:34.170013 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:34.175925 systemd-logind[1623]: New session 26 of user core. Nov 4 23:53:34.193849 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 4 23:53:34.691266 sshd[4617]: Connection closed by 10.0.0.1 port 42358 Nov 4 23:53:34.692842 sshd-session[4614]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:34.706029 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:42358.service: Deactivated successfully. Nov 4 23:53:34.711030 systemd[1]: session-26.scope: Deactivated successfully. Nov 4 23:53:34.714610 systemd-logind[1623]: Session 26 logged out. Waiting for processes to exit. Nov 4 23:53:34.721109 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:42360.service - OpenSSH per-connection server daemon (10.0.0.1:42360). Nov 4 23:53:34.724678 systemd-logind[1623]: Removed session 26. Nov 4 23:53:34.737572 systemd[1]: Created slice kubepods-burstable-pod0ca250e3_5909_418a_897c_11af3fbbe13d.slice - libcontainer container kubepods-burstable-pod0ca250e3_5909_418a_897c_11af3fbbe13d.slice. 
Nov 4 23:53:34.770572 kubelet[2832]: I1104 23:53:34.769887 2832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b8e2b6-209b-4eb5-b124-29c5d32cce55" path="/var/lib/kubelet/pods/17b8e2b6-209b-4eb5-b124-29c5d32cce55/volumes" Nov 4 23:53:34.770572 kubelet[2832]: I1104 23:53:34.770468 2832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd16a140-06b9-436e-af33-de26c18ef27a" path="/var/lib/kubelet/pods/fd16a140-06b9-436e-af33-de26c18ef27a/volumes" Nov 4 23:53:34.791911 sshd[4631]: Accepted publickey for core from 10.0.0.1 port 42360 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:53:34.793025 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:34.798075 systemd-logind[1623]: New session 27 of user core. Nov 4 23:53:34.810711 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 4 23:53:34.817662 kubelet[2832]: I1104 23:53:34.817617 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-bpf-maps\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817662 kubelet[2832]: I1104 23:53:34.817657 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-hostproc\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817817 kubelet[2832]: I1104 23:53:34.817683 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0ca250e3-5909-418a-897c-11af3fbbe13d-clustermesh-secrets\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " 
pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817817 kubelet[2832]: I1104 23:53:34.817766 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-host-proc-sys-kernel\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817817 kubelet[2832]: I1104 23:53:34.817812 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwq9\" (UniqueName: \"kubernetes.io/projected/0ca250e3-5909-418a-897c-11af3fbbe13d-kube-api-access-mwwq9\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817939 kubelet[2832]: I1104 23:53:34.817864 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-etc-cni-netd\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.817939 kubelet[2832]: I1104 23:53:34.817917 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-lib-modules\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818010 kubelet[2832]: I1104 23:53:34.817939 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ca250e3-5909-418a-897c-11af3fbbe13d-hubble-tls\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818010 kubelet[2832]: I1104 23:53:34.817984 2832 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-cilium-cgroup\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818087 kubelet[2832]: I1104 23:53:34.818008 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0ca250e3-5909-418a-897c-11af3fbbe13d-cilium-ipsec-secrets\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818087 kubelet[2832]: I1104 23:53:34.818030 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-cni-path\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818164 kubelet[2832]: I1104 23:53:34.818078 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ca250e3-5909-418a-897c-11af3fbbe13d-cilium-config-path\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818164 kubelet[2832]: I1104 23:53:34.818151 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-cilium-run\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818245 kubelet[2832]: I1104 23:53:34.818177 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-xtables-lock\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.818245 kubelet[2832]: I1104 23:53:34.818222 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ca250e3-5909-418a-897c-11af3fbbe13d-host-proc-sys-net\") pod \"cilium-hv7qd\" (UID: \"0ca250e3-5909-418a-897c-11af3fbbe13d\") " pod="kube-system/cilium-hv7qd" Nov 4 23:53:34.863685 sshd[4634]: Connection closed by 10.0.0.1 port 42360 Nov 4 23:53:34.864214 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:34.880613 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:42360.service: Deactivated successfully. Nov 4 23:53:34.882886 systemd[1]: session-27.scope: Deactivated successfully. Nov 4 23:53:34.883922 systemd-logind[1623]: Session 27 logged out. Waiting for processes to exit. Nov 4 23:53:34.885804 systemd-logind[1623]: Removed session 27. Nov 4 23:53:34.887365 systemd[1]: Started sshd@27-10.0.0.67:22-10.0.0.1:42376.service - OpenSSH per-connection server daemon (10.0.0.1:42376). Nov 4 23:53:34.955150 sshd[4642]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:53:34.957403 sshd-session[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:34.963169 systemd-logind[1623]: New session 28 of user core. Nov 4 23:53:34.985735 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 4 23:53:35.342138 kubelet[2832]: E1104 23:53:35.342083 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:35.342757 containerd[1638]: time="2025-11-04T23:53:35.342691330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hv7qd,Uid:0ca250e3-5909-418a-897c-11af3fbbe13d,Namespace:kube-system,Attempt:0,}" Nov 4 23:53:35.503032 containerd[1638]: time="2025-11-04T23:53:35.502962940Z" level=info msg="connecting to shim ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:35.536798 systemd[1]: Started cri-containerd-ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7.scope - libcontainer container ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7. 
Nov 4 23:53:35.566236 containerd[1638]: time="2025-11-04T23:53:35.566169511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hv7qd,Uid:0ca250e3-5909-418a-897c-11af3fbbe13d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\""
Nov 4 23:53:35.567032 kubelet[2832]: E1104 23:53:35.566994 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:35.573823 containerd[1638]: time="2025-11-04T23:53:35.573778794Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 4 23:53:35.620406 containerd[1638]: time="2025-11-04T23:53:35.620294142Z" level=info msg="Container 2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:53:35.627231 containerd[1638]: time="2025-11-04T23:53:35.627163874Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\""
Nov 4 23:53:35.627798 containerd[1638]: time="2025-11-04T23:53:35.627769970Z" level=info msg="StartContainer for \"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\""
Nov 4 23:53:35.628651 containerd[1638]: time="2025-11-04T23:53:35.628590685Z" level=info msg="connecting to shim 2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" protocol=ttrpc version=3
Nov 4 23:53:35.651687 systemd[1]: Started cri-containerd-2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332.scope - libcontainer container 2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332.
Nov 4 23:53:35.683469 containerd[1638]: time="2025-11-04T23:53:35.683422203Z" level=info msg="StartContainer for \"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\" returns successfully"
Nov 4 23:53:35.694774 systemd[1]: cri-containerd-2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332.scope: Deactivated successfully.
Nov 4 23:53:35.697281 containerd[1638]: time="2025-11-04T23:53:35.697215260Z" level=info msg="received exit event container_id:\"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\" id:\"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\" pid:4718 exited_at:{seconds:1762300415 nanos:696867988}"
Nov 4 23:53:35.697518 containerd[1638]: time="2025-11-04T23:53:35.697276577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\" id:\"2533742e30b5f24c3025f6786a02fbaed09286ab43574506711143be6f1ff332\" pid:4718 exited_at:{seconds:1762300415 nanos:696867988}"
Nov 4 23:53:36.299465 kubelet[2832]: E1104 23:53:36.299422 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:36.303717 containerd[1638]: time="2025-11-04T23:53:36.303664671Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 4 23:53:36.324116 containerd[1638]: time="2025-11-04T23:53:36.323914705Z" level=info msg="Container dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:53:36.331329 containerd[1638]: time="2025-11-04T23:53:36.331247468Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\""
Nov 4 23:53:36.331895 containerd[1638]: time="2025-11-04T23:53:36.331830209Z" level=info msg="StartContainer for \"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\""
Nov 4 23:53:36.332959 containerd[1638]: time="2025-11-04T23:53:36.332916260Z" level=info msg="connecting to shim dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" protocol=ttrpc version=3
Nov 4 23:53:36.356726 systemd[1]: Started cri-containerd-dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c.scope - libcontainer container dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c.
Nov 4 23:53:36.388512 containerd[1638]: time="2025-11-04T23:53:36.388456870Z" level=info msg="StartContainer for \"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\" returns successfully"
Nov 4 23:53:36.395937 systemd[1]: cri-containerd-dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c.scope: Deactivated successfully.
Nov 4 23:53:36.396596 containerd[1638]: time="2025-11-04T23:53:36.396513212Z" level=info msg="received exit event container_id:\"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\" id:\"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\" pid:4765 exited_at:{seconds:1762300416 nanos:396231755}"
Nov 4 23:53:36.396775 containerd[1638]: time="2025-11-04T23:53:36.396747719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\" id:\"dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c\" pid:4765 exited_at:{seconds:1762300416 nanos:396231755}"
Nov 4 23:53:36.420447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dea0ac0776476e7e093b717f58bf27f1179c20d303f599b69a02b2bd010f2d5c-rootfs.mount: Deactivated successfully.
Nov 4 23:53:36.830047 kubelet[2832]: E1104 23:53:36.829947 2832 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 4 23:53:37.303716 kubelet[2832]: E1104 23:53:37.303556 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:37.308989 containerd[1638]: time="2025-11-04T23:53:37.308906488Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 4 23:53:37.325937 containerd[1638]: time="2025-11-04T23:53:37.325441292Z" level=info msg="Container 428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:53:37.340583 containerd[1638]: time="2025-11-04T23:53:37.340489622Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\""
Nov 4 23:53:37.341227 containerd[1638]: time="2025-11-04T23:53:37.341146284Z" level=info msg="StartContainer for \"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\""
Nov 4 23:53:37.342694 containerd[1638]: time="2025-11-04T23:53:37.342648347Z" level=info msg="connecting to shim 428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" protocol=ttrpc version=3
Nov 4 23:53:37.376871 systemd[1]: Started cri-containerd-428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39.scope - libcontainer container 428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39.
Nov 4 23:53:37.436321 systemd[1]: cri-containerd-428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39.scope: Deactivated successfully.
Nov 4 23:53:37.437715 containerd[1638]: time="2025-11-04T23:53:37.437650079Z" level=info msg="received exit event container_id:\"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\" id:\"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\" pid:4808 exited_at:{seconds:1762300417 nanos:437424178}"
Nov 4 23:53:37.438123 containerd[1638]: time="2025-11-04T23:53:37.437842786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\" id:\"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\" pid:4808 exited_at:{seconds:1762300417 nanos:437424178}"
Nov 4 23:53:37.438656 containerd[1638]: time="2025-11-04T23:53:37.438615339Z" level=info msg="StartContainer for \"428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39\" returns successfully"
Nov 4 23:53:37.468056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428b5c53e38c863575de7db59c73eb9e07f0bc92d617109cecaf2986700e7c39-rootfs.mount: Deactivated successfully.
Nov 4 23:53:38.309245 kubelet[2832]: E1104 23:53:38.309206 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:38.317015 containerd[1638]: time="2025-11-04T23:53:38.316973568Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 23:53:38.326619 containerd[1638]: time="2025-11-04T23:53:38.326530206Z" level=info msg="Container 7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:53:38.334622 containerd[1638]: time="2025-11-04T23:53:38.334531366Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\""
Nov 4 23:53:38.335183 containerd[1638]: time="2025-11-04T23:53:38.335121512Z" level=info msg="StartContainer for \"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\""
Nov 4 23:53:38.336228 containerd[1638]: time="2025-11-04T23:53:38.336203985Z" level=info msg="connecting to shim 7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" protocol=ttrpc version=3
Nov 4 23:53:38.367857 systemd[1]: Started cri-containerd-7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2.scope - libcontainer container 7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2.
Nov 4 23:53:38.400590 systemd[1]: cri-containerd-7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2.scope: Deactivated successfully.
Nov 4 23:53:38.401581 containerd[1638]: time="2025-11-04T23:53:38.401506431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\" id:\"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\" pid:4848 exited_at:{seconds:1762300418 nanos:400826575}"
Nov 4 23:53:38.401953 containerd[1638]: time="2025-11-04T23:53:38.401909650Z" level=info msg="received exit event container_id:\"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\" id:\"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\" pid:4848 exited_at:{seconds:1762300418 nanos:400826575}"
Nov 4 23:53:38.410679 containerd[1638]: time="2025-11-04T23:53:38.410626706Z" level=info msg="StartContainer for \"7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2\" returns successfully"
Nov 4 23:53:38.427047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b36ee33656d018a799399f8da6b91361bb141c046e08b5be1e2a7997d169ed2-rootfs.mount: Deactivated successfully.
Nov 4 23:53:38.976419 kubelet[2832]: I1104 23:53:38.976340 2832 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-04T23:53:38Z","lastTransitionTime":"2025-11-04T23:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 4 23:53:39.316039 kubelet[2832]: E1104 23:53:39.315977 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:39.321849 containerd[1638]: time="2025-11-04T23:53:39.321785532Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 23:53:39.337405 containerd[1638]: time="2025-11-04T23:53:39.337336639Z" level=info msg="Container 1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:53:39.344260 containerd[1638]: time="2025-11-04T23:53:39.344221640Z" level=info msg="CreateContainer within sandbox \"ed447e4d76a02e6b610d87256499d47e94abf6199964419f21b39ba6544f49b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\""
Nov 4 23:53:39.344944 containerd[1638]: time="2025-11-04T23:53:39.344749015Z" level=info msg="StartContainer for \"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\""
Nov 4 23:53:39.345757 containerd[1638]: time="2025-11-04T23:53:39.345721078Z" level=info msg="connecting to shim 1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7" address="unix:///run/containerd/s/43f3f6fa7ebb2cde0da959268022ab4d4761a406201f2ddcbf63b5e46b007d6f" protocol=ttrpc version=3
Nov 4 23:53:39.374683 systemd[1]: Started cri-containerd-1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7.scope - libcontainer container 1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7.
Nov 4 23:53:39.417725 containerd[1638]: time="2025-11-04T23:53:39.417665620Z" level=info msg="StartContainer for \"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" returns successfully"
Nov 4 23:53:39.490769 containerd[1638]: time="2025-11-04T23:53:39.490715718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"7c12cdfea2659ed38770c9ece7dac5e5a2637f396280cc38a395f3c1c28c6e80\" pid:4916 exited_at:{seconds:1762300419 nanos:490358227}"
Nov 4 23:53:39.765555 kubelet[2832]: E1104 23:53:39.765392 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:39.888584 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 4 23:53:40.321743 kubelet[2832]: E1104 23:53:40.321684 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:40.346008 kubelet[2832]: I1104 23:53:40.345940 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hv7qd" podStartSLOduration=6.345909384 podStartE2EDuration="6.345909384s" podCreationTimestamp="2025-11-04 23:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:53:40.344761056 +0000 UTC m=+103.718835004" watchObservedRunningTime="2025-11-04 23:53:40.345909384 +0000 UTC m=+103.719983312"
Nov 4 23:53:41.343596 kubelet[2832]: E1104 23:53:41.343500 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:41.775172 containerd[1638]: time="2025-11-04T23:53:41.774818083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"c364e1be791a93a35876bf51c8ecc21a3d962ac82a39ee884eaa41076241cd4b\" pid:5107 exit_status:1 exited_at:{seconds:1762300421 nanos:774405928}"
Nov 4 23:53:42.765900 kubelet[2832]: E1104 23:53:42.765859 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:43.180285 systemd-networkd[1532]: lxc_health: Link UP
Nov 4 23:53:43.182858 systemd-networkd[1532]: lxc_health: Gained carrier
Nov 4 23:53:43.294680 update_engine[1625]: I20251104 23:53:43.294605 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 4 23:53:43.295679 update_engine[1625]: I20251104 23:53:43.295228 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 4 23:53:43.295679 update_engine[1625]: I20251104 23:53:43.295642 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 4 23:53:43.309395 update_engine[1625]: E20251104 23:53:43.309367 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 4 23:53:43.309556 update_engine[1625]: I20251104 23:53:43.309520 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 4 23:53:43.344579 kubelet[2832]: E1104 23:53:43.344001 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:43.899816 containerd[1638]: time="2025-11-04T23:53:43.899430683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"ed952549b1175edf8d5d7d6a0ccd25bf41f665370fca15ee45eaefa49fbb07bb\" pid:5477 exited_at:{seconds:1762300423 nanos:898909601}"
Nov 4 23:53:44.300916 systemd-networkd[1532]: lxc_health: Gained IPv6LL
Nov 4 23:53:44.334311 kubelet[2832]: E1104 23:53:44.334243 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:45.333163 kubelet[2832]: E1104 23:53:45.333097 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:46.009076 containerd[1638]: time="2025-11-04T23:53:46.009013668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"8b3e5067dc35dbfb104feb901596894ad288875033cb73aba6efe70a78d06908\" pid:5515 exited_at:{seconds:1762300426 nanos:7879980}"
Nov 4 23:53:48.129469 containerd[1638]: time="2025-11-04T23:53:48.129380457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"5b42af628e0b01d8ba84c51493fbf226b0fcb1f33057481d6bde5a3bf58822f3\" pid:5547 exited_at:{seconds:1762300428 nanos:128832474}"
Nov 4 23:53:50.223968 containerd[1638]: time="2025-11-04T23:53:50.223918281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a12f366d35e5b31688e9df1f1744ed1f91353369c6fccdb1b53e2eaf75396d7\" id:\"cedf97b8a5c8fe11f6639a8cfd948f282e9ab78bf0e37dc3e14b8958712e9f61\" pid:5571 exited_at:{seconds:1762300430 nanos:223571360}"
Nov 4 23:53:50.243363 sshd[4650]: Connection closed by 10.0.0.1 port 42376
Nov 4 23:53:50.242580 sshd-session[4642]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:50.247778 systemd[1]: sshd@27-10.0.0.67:22-10.0.0.1:42376.service: Deactivated successfully.
Nov 4 23:53:50.250365 systemd[1]: session-28.scope: Deactivated successfully.
Nov 4 23:53:50.251445 systemd-logind[1623]: Session 28 logged out. Waiting for processes to exit.
Nov 4 23:53:50.253332 systemd-logind[1623]: Removed session 28.