Nov 8 00:22:00.005824 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:22:00.005849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:22:00.005863 kernel: BIOS-provided physical RAM map:
Nov 8 00:22:00.005871 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:22:00.005878 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:22:00.005886 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:22:00.005895 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 8 00:22:00.005903 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 8 00:22:00.005911 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:22:00.005922 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:22:00.005929 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:22:00.005937 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:22:00.005949 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:22:00.005957 kernel: NX (Execute Disable) protection: active
Nov 8 00:22:00.005967 kernel: APIC: Static calls initialized
Nov 8 00:22:00.005981 kernel: SMBIOS 2.8 present.
Nov 8 00:22:00.005990 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 8 00:22:00.005998 kernel: Hypervisor detected: KVM
Nov 8 00:22:00.006006 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:22:00.006014 kernel: kvm-clock: using sched offset of 2967050239 cycles
Nov 8 00:22:00.006024 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:22:00.006033 kernel: tsc: Detected 2794.748 MHz processor
Nov 8 00:22:00.006041 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:22:00.006050 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:22:00.006063 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 8 00:22:00.006073 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:22:00.006084 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:22:00.006093 kernel: Using GB pages for direct mapping
Nov 8 00:22:00.006102 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:22:00.006110 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 8 00:22:00.006119 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006128 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006136 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006148 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 8 00:22:00.006156 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006165 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006174 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006182 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:22:00.006191 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 8 00:22:00.006200 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 8 00:22:00.006212 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 8 00:22:00.006224 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 8 00:22:00.006252 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 8 00:22:00.006261 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 8 00:22:00.006270 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 8 00:22:00.006279 kernel: No NUMA configuration found
Nov 8 00:22:00.006288 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 8 00:22:00.006300 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 8 00:22:00.006309 kernel: Zone ranges:
Nov 8 00:22:00.006318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:22:00.006327 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 8 00:22:00.006336 kernel: Normal empty
Nov 8 00:22:00.006345 kernel: Movable zone start for each node
Nov 8 00:22:00.006354 kernel: Early memory node ranges
Nov 8 00:22:00.006363 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:22:00.006372 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 8 00:22:00.006381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 8 00:22:00.006393 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:22:00.006405 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:22:00.006414 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 8 00:22:00.006423 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:22:00.006432 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:22:00.006441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:22:00.006450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:22:00.006459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:22:00.006468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:22:00.006480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:22:00.006489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:22:00.006498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:22:00.006507 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:22:00.006516 kernel: TSC deadline timer available
Nov 8 00:22:00.006525 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:22:00.006534 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:22:00.006543 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:22:00.006554 kernel: kvm-guest: setup PV sched yield
Nov 8 00:22:00.006566 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:22:00.006575 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:22:00.006585 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:22:00.006594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:22:00.006603 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:22:00.006612 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:22:00.006620 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:22:00.006629 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:22:00.006638 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:22:00.006651 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:22:00.006660 kernel: random: crng init done
Nov 8 00:22:00.006669 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:22:00.006678 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:22:00.006687 kernel: Fallback order for Node 0: 0
Nov 8 00:22:00.006696 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 8 00:22:00.006705 kernel: Policy zone: DMA32
Nov 8 00:22:00.006714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:22:00.006726 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Nov 8 00:22:00.006735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:22:00.006744 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:22:00.006753 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:22:00.006762 kernel: Dynamic Preempt: voluntary
Nov 8 00:22:00.006771 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:22:00.006781 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:22:00.006798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:22:00.006807 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:22:00.006820 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:22:00.006829 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:22:00.006838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:22:00.006847 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:22:00.006858 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:22:00.006868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:22:00.006876 kernel: Console: colour VGA+ 80x25
Nov 8 00:22:00.006885 kernel: printk: console [ttyS0] enabled
Nov 8 00:22:00.006894 kernel: ACPI: Core revision 20230628
Nov 8 00:22:00.006904 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:22:00.006915 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:22:00.006924 kernel: x2apic enabled
Nov 8 00:22:00.006933 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:22:00.006942 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:22:00.006952 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:22:00.006961 kernel: kvm-guest: setup PV IPIs
Nov 8 00:22:00.006970 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:22:00.006991 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:22:00.007000 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 8 00:22:00.007010 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:22:00.007019 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:22:00.007031 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:22:00.007041 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:22:00.007050 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:22:00.007060 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:22:00.007070 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:22:00.007082 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:22:00.007091 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:22:00.007104 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:22:00.007113 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:22:00.007123 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:22:00.007133 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:22:00.007143 kernel: active return thunk: srso_return_thunk
Nov 8 00:22:00.007152 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:22:00.007165 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:22:00.007174 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:22:00.007183 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:22:00.007193 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:22:00.007203 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:22:00.007212 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:22:00.007222 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:22:00.007242 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:22:00.007252 kernel: landlock: Up and running.
Nov 8 00:22:00.007264 kernel: SELinux: Initializing.
Nov 8 00:22:00.007274 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:22:00.007283 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:22:00.007293 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:22:00.007303 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:22:00.007312 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:22:00.007322 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:22:00.007332 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:22:00.007344 kernel: ... version: 0
Nov 8 00:22:00.007356 kernel: ... bit width: 48
Nov 8 00:22:00.007366 kernel: ... generic registers: 6
Nov 8 00:22:00.007376 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:22:00.007385 kernel: ... max period: 00007fffffffffff
Nov 8 00:22:00.007394 kernel: ... fixed-purpose events: 0
Nov 8 00:22:00.007404 kernel: ... event mask: 000000000000003f
Nov 8 00:22:00.007413 kernel: signal: max sigframe size: 1776
Nov 8 00:22:00.007423 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:22:00.007433 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:22:00.007446 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:22:00.007455 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:22:00.007464 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:22:00.007474 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:22:00.007483 kernel: smpboot: Max logical packages: 1
Nov 8 00:22:00.007493 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 8 00:22:00.007502 kernel: devtmpfs: initialized
Nov 8 00:22:00.007512 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:22:00.007522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:22:00.007534 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:22:00.007543 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:22:00.007553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:22:00.007562 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:22:00.007572 kernel: audit: type=2000 audit(1762561318.654:1): state=initialized audit_enabled=0 res=1
Nov 8 00:22:00.007581 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:22:00.007591 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:22:00.007600 kernel: cpuidle: using governor menu
Nov 8 00:22:00.007610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:22:00.007622 kernel: dca service started, version 1.12.1
Nov 8 00:22:00.007632 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:22:00.007641 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:22:00.007651 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:22:00.007661 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:22:00.007670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:22:00.007680 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:22:00.007689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:22:00.007699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:22:00.007711 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:22:00.007720 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:22:00.007730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:22:00.007739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:22:00.007749 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:22:00.007758 kernel: ACPI: Interpreter enabled
Nov 8 00:22:00.007768 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:22:00.007777 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:22:00.007794 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:22:00.007807 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:22:00.007816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:22:00.007826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:22:00.008075 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:22:00.008287 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:22:00.008437 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:22:00.008449 kernel: PCI host bridge to bus 0000:00
Nov 8 00:22:00.008617 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:22:00.008758 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:22:00.008901 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:22:00.009037 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:22:00.009193 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:22:00.009446 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 8 00:22:00.009576 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:22:00.009767 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:22:00.009937 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:22:00.010080 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:22:00.010223 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:22:00.010395 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:22:00.010633 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:22:00.010806 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:22:00.010957 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 8 00:22:00.011099 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:22:00.011265 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:22:00.011437 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:22:00.011581 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:22:00.011725 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:22:00.011916 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:22:00.012171 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:22:00.012406 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 8 00:22:00.012632 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:22:00.012779 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 8 00:22:00.012933 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:22:00.013095 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:22:00.013260 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:22:00.013428 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:22:00.013571 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 8 00:22:00.013713 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 8 00:22:00.013974 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:22:00.014124 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:22:00.014136 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:22:00.014152 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:22:00.014162 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:22:00.014171 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:22:00.014181 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:22:00.014190 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:22:00.014199 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:22:00.014209 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:22:00.014219 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:22:00.014228 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:22:00.014257 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:22:00.014266 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:22:00.014276 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:22:00.014286 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:22:00.014295 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:22:00.014305 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:22:00.014315 kernel: iommu: Default domain type: Translated
Nov 8 00:22:00.014325 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:22:00.014334 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:22:00.014347 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:22:00.014356 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:22:00.014366 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 8 00:22:00.014510 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:22:00.014650 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:22:00.014850 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:22:00.014864 kernel: vgaarb: loaded
Nov 8 00:22:00.014874 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:22:00.014888 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:22:00.014898 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:22:00.014908 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:22:00.014917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:22:00.014927 kernel: pnp: PnP ACPI init
Nov 8 00:22:00.015101 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:22:00.015115 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:22:00.015124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:22:00.015134 kernel: NET: Registered PF_INET protocol family
Nov 8 00:22:00.015149 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:22:00.015160 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:22:00.015170 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:22:00.015182 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:22:00.015193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:22:00.015205 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:22:00.015214 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:22:00.015224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:22:00.015252 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:22:00.015261 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:22:00.015399 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:22:00.015531 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:22:00.015661 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:22:00.015801 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:22:00.015933 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:22:00.016064 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 8 00:22:00.016076 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:22:00.016090 kernel: Initialise system trusted keyrings
Nov 8 00:22:00.016101 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:22:00.016112 kernel: Key type asymmetric registered
Nov 8 00:22:00.016122 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:22:00.016131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:22:00.016141 kernel: io scheduler mq-deadline registered
Nov 8 00:22:00.016151 kernel: io scheduler kyber registered
Nov 8 00:22:00.016160 kernel: io scheduler bfq registered
Nov 8 00:22:00.016169 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:22:00.016182 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:22:00.016192 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:22:00.016202 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:22:00.016211 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:22:00.016221 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:22:00.016293 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:22:00.016303 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:22:00.016313 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:22:00.016478 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:22:00.016497 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:22:00.016628 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:22:00.016760 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:21:59 UTC (1762561319)
Nov 8 00:22:00.016902 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:22:00.016915 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:22:00.016924 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:22:00.016934 kernel: Segment Routing with IPv6
Nov 8 00:22:00.016943 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:22:00.016957 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:22:00.016967 kernel: Key type dns_resolver registered
Nov 8 00:22:00.016976 kernel: IPI shorthand broadcast: enabled
Nov 8 00:22:00.016986 kernel: sched_clock: Marking stable (926005085, 199275446)->(1185559437, -60278906)
Nov 8 00:22:00.016995 kernel: registered taskstats version 1
Nov 8 00:22:00.017005 kernel: Loading compiled-in X.509 certificates
Nov 8 00:22:00.017015 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:22:00.017024 kernel: Key type .fscrypt registered
Nov 8 00:22:00.017033 kernel: Key type fscrypt-provisioning registered
Nov 8 00:22:00.017046 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:22:00.017055 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:22:00.017065 kernel: ima: No architecture policies found
Nov 8 00:22:00.017074 kernel: clk: Disabling unused clocks
Nov 8 00:22:00.017084 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:22:00.017093 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:22:00.017103 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:22:00.017113 kernel: Run /init as init process
Nov 8 00:22:00.017124 kernel: with arguments:
Nov 8 00:22:00.017139 kernel: /init
Nov 8 00:22:00.017150 kernel: with environment:
Nov 8 00:22:00.017162 kernel: HOME=/
Nov 8 00:22:00.017174 kernel: TERM=linux
Nov 8 00:22:00.017188 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:22:00.017204 systemd[1]: Detected virtualization kvm.
Nov 8 00:22:00.017217 systemd[1]: Detected architecture x86-64.
Nov 8 00:22:00.017245 systemd[1]: Running in initrd.
Nov 8 00:22:00.017262 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:22:00.017275 systemd[1]: Hostname set to .
Nov 8 00:22:00.017288 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:22:00.017301 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:22:00.017314 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:22:00.017327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:22:00.017340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:22:00.017351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:22:00.017364 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:22:00.017389 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:22:00.017404 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:22:00.017415 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:22:00.017429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:22:00.017440 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:22:00.017450 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:22:00.017460 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:22:00.017474 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:22:00.017484 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:22:00.017495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:22:00.017506 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:22:00.017516 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:22:00.017530 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:22:00.017541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:22:00.017552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:22:00.017562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:22:00.017573 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:22:00.017583 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:22:00.017594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:22:00.017604 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:22:00.017618 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:22:00.017629 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:22:00.017639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:22:00.017650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:22:00.017660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:22:00.017671 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:22:00.017682 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:22:00.017718 systemd-journald[193]: Collecting audit messages is disabled. Nov 8 00:22:00.017743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:22:00.017757 systemd-journald[193]: Journal started Nov 8 00:22:00.017779 systemd-journald[193]: Runtime Journal (/run/log/journal/f439b5f8261d4ee99a8118d430b7958e) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:22:00.012591 systemd-modules-load[194]: Inserted module 'overlay' Nov 8 00:22:00.084955 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:22:00.084984 kernel: Bridge firewalling registered Nov 8 00:22:00.041887 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 8 00:22:00.104518 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:22:00.105198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:22:00.109273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:22:00.113512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:22:00.133476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:22:00.135114 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:22:00.136463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:22:00.140412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:22:00.156444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:22:00.159020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:22:00.162609 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:22:00.174391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:22:00.177752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:22:00.179857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:22:00.189077 dracut-cmdline[226]: dracut-dracut-053 Nov 8 00:22:00.192042 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:22:00.220029 systemd-resolved[230]: Positive Trust Anchors: Nov 8 00:22:00.220045 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:22:00.220083 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:22:00.223264 systemd-resolved[230]: Defaulting to hostname 'linux'. Nov 8 00:22:00.224539 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:22:00.235958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:22:00.275300 kernel: SCSI subsystem initialized Nov 8 00:22:00.285265 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:22:00.296267 kernel: iscsi: registered transport (tcp) Nov 8 00:22:00.317574 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:22:00.317601 kernel: QLogic iSCSI HBA Driver Nov 8 00:22:00.368901 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:22:00.376426 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:22:00.404083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
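The dracut-cmdline hook above echoes the full kernel command line it will act on. Conceptually it is a whitespace-separated list of `key=value` tokens (with bare flags allowed); a minimal sketch of that split, assuming a simplified grammar without quoting:

```python
# Hedged sketch (not dracut's actual parser) of splitting a kernel command
# line like the one logged above into key/value pairs.
def parse_cmdline(cmdline: str) -> dict:
    """Bare flags map to True; only the first '=' separates key from value."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])   # LABEL=ROOT
```

Note how `root=LABEL=ROOT` keeps its second `=` inside the value, which is why splitting only on the first `=` matters.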
Nov 8 00:22:00.404123 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:22:00.405769 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:22:00.448259 kernel: raid6: avx2x4 gen() 24010 MB/s Nov 8 00:22:00.465258 kernel: raid6: avx2x2 gen() 30575 MB/s Nov 8 00:22:00.483063 kernel: raid6: avx2x1 gen() 25466 MB/s Nov 8 00:22:00.483106 kernel: raid6: using algorithm avx2x2 gen() 30575 MB/s Nov 8 00:22:00.501100 kernel: raid6: .... xor() 19606 MB/s, rmw enabled Nov 8 00:22:00.501125 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:22:00.525264 kernel: xor: automatically using best checksumming function avx Nov 8 00:22:00.687272 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:22:00.700378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:22:00.711394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:22:00.724254 systemd-udevd[414]: Using default interface naming scheme 'v255'. Nov 8 00:22:00.729444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:22:00.741428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:22:00.756340 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Nov 8 00:22:00.792843 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:22:00.801382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:22:00.874345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:22:00.889832 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:22:00.901150 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:22:00.906982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 8 00:22:00.912034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:22:00.916822 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:22:00.924915 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 8 00:22:00.928261 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 8 00:22:00.929921 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:22:00.934291 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:22:00.944893 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:22:00.944959 kernel: GPT:9289727 != 19775487 Nov 8 00:22:00.944985 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:22:00.945020 kernel: GPT:9289727 != 19775487 Nov 8 00:22:00.945046 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:22:00.945071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:22:00.944362 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:22:00.960550 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:22:00.960603 kernel: AES CTR mode by8 optimization enabled Nov 8 00:22:00.962508 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:22:00.962671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:22:00.970068 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:22:00.975435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:22:00.975899 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:22:00.985313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
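The `GPT:9289727 != 19775487` warnings above are benign and typical of a grown disk image: a GPT's backup (alternate) header must sit at the disk's last LBA, but this one still records the last LBA of the image before it was enlarged. A small sketch of the arithmetic, using the sector counts from the virtio-blk and GPT lines above:

```python
# Sketch: where the backup GPT header should be vs. where the kernel found
# it, per the log lines above. Numbers are taken directly from the log.
SECTOR = 512
total_sectors = 19775488            # virtio-blk: 19775488 512-byte logical blocks
expected_alt_lba = total_sectors - 1  # backup header belongs at the last LBA
found_alt_lba = 9289727             # stale value from before the image was grown

print(expected_alt_lba)             # 19775487
print(found_alt_lba != expected_alt_lba)  # True -> kernel emits the warning
# Implied size of the original (pre-resize) image:
print((found_alt_lba + 1) * SECTOR)  # 4756340736 bytes (~4.4 GiB)
```

Tools like `sgdisk` or GNU Parted (which the kernel message suggests) rewrite the backup header at the true end of the disk, after which the warning disappears.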
Nov 8 00:22:00.996450 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (472) Nov 8 00:22:00.996471 kernel: libata version 3.00 loaded. Nov 8 00:22:00.996482 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Nov 8 00:22:01.001528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:22:01.013717 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:22:01.013947 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:22:01.013961 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:22:01.014108 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:22:01.019347 kernel: scsi host0: ahci Nov 8 00:22:01.021260 kernel: scsi host1: ahci Nov 8 00:22:01.021601 kernel: scsi host2: ahci Nov 8 00:22:01.028332 kernel: scsi host3: ahci Nov 8 00:22:01.029249 kernel: scsi host4: ahci Nov 8 00:22:01.029546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 8 00:22:01.103717 kernel: scsi host5: ahci Nov 8 00:22:01.103913 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 8 00:22:01.103926 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 8 00:22:01.103936 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 8 00:22:01.103953 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 8 00:22:01.103963 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 8 00:22:01.103974 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 8 00:22:01.103953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:22:01.112920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 8 00:22:01.121679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:22:01.129470 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 8 00:22:01.133823 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 00:22:01.150391 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:22:01.155561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:22:01.160602 disk-uuid[559]: Primary Header is updated. Nov 8 00:22:01.160602 disk-uuid[559]: Secondary Entries is updated. Nov 8 00:22:01.160602 disk-uuid[559]: Secondary Header is updated. Nov 8 00:22:01.164259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:22:01.183859 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:22:01.339674 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:22:01.339745 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:22:01.340267 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:22:01.341261 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 8 00:22:01.344271 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:22:01.344295 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:22:01.345267 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 8 00:22:01.346550 kernel: ata3.00: applying bridge limits Nov 8 00:22:01.347464 kernel: ata3.00: configured for UDMA/100 Nov 8 00:22:01.348269 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 8 00:22:01.396751 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 8 00:22:01.396989 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:22:01.409265 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:22:02.177259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:22:02.177578 disk-uuid[562]: The operation has completed successfully. Nov 8 00:22:02.209260 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:22:02.209428 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:22:02.229376 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:22:02.236594 sh[596]: Success Nov 8 00:22:02.251258 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:22:02.285630 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:22:02.301024 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:22:02.304096 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
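The `verity-setup.service` lines above activate dm-verity for `/dev/mapper/usr`: every block of the read-only /usr partition is checked against a Merkle tree of SHA-256 digests whose root must equal the `verity.usrhash` value from the kernel command line. A hedged illustration of the leaf-level idea only (the real kernel code uses salted digests and a multi-level tree in dm-verity's on-disk format, which this does not reproduce):

```python
import hashlib

# Illustrative sketch of the per-block check dm-verity performs; this is
# NOT Flatcar's actual on-disk verity format, just the leaf-hash concept.
BLOCK_SIZE = 4096

def leaf_digest(block: bytes) -> str:
    """Digest of one data block; a mismatch would make the read fail."""
    assert len(block) == BLOCK_SIZE
    return hashlib.sha256(block).hexdigest()

block = b"\x00" * BLOCK_SIZE
print(leaf_digest(block))  # deterministic 64-hex-char digest
```

Because only the 64-character root hash needs to be trusted (it is baked into the signed kernel command line), any offline tampering with the /usr partition is detected at read time.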
Nov 8 00:22:02.323164 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:22:02.323199 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:22:02.323213 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:22:02.324811 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:22:02.326013 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:22:02.331114 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:22:02.334501 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:22:02.349382 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:22:02.352983 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:22:02.363363 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:22:02.363399 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:22:02.363412 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:22:02.367704 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:22:02.377101 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:22:02.379961 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:22:02.390272 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:22:02.398390 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 8 00:22:02.455458 ignition[686]: Ignition 2.19.0 Nov 8 00:22:02.456311 ignition[686]: Stage: fetch-offline Nov 8 00:22:02.456397 ignition[686]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:22:02.456423 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:22:02.456587 ignition[686]: parsed url from cmdline: "" Nov 8 00:22:02.456596 ignition[686]: no config URL provided Nov 8 00:22:02.456607 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:22:02.456628 ignition[686]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:22:02.456687 ignition[686]: op(1): [started] loading QEMU firmware config module Nov 8 00:22:02.456700 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 8 00:22:02.471106 ignition[686]: op(1): [finished] loading QEMU firmware config module Nov 8 00:22:02.471135 ignition[686]: QEMU firmware config was not found. Ignoring... Nov 8 00:22:02.499365 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:22:02.511437 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:22:02.536727 systemd-networkd[784]: lo: Link UP Nov 8 00:22:02.536735 systemd-networkd[784]: lo: Gained carrier Nov 8 00:22:02.538364 systemd-networkd[784]: Enumeration completed Nov 8 00:22:02.538753 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:22:02.538757 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:22:02.538921 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 8 00:22:02.540592 systemd-networkd[784]: eth0: Link UP Nov 8 00:22:02.540597 systemd-networkd[784]: eth0: Gained carrier Nov 8 00:22:02.540604 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:22:02.544936 systemd[1]: Reached target network.target - Network. Nov 8 00:22:02.565274 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:22:02.583520 ignition[686]: parsing config with SHA512: 6a50e927d31def726e2b8780b44135ec3297acdd758e97094a27e0f04174791b609d107109307fd733ebd1a60909775072a2e70046d1d72b5c5e43f0f2bb1db5 Nov 8 00:22:02.587311 unknown[686]: fetched base config from "system" Nov 8 00:22:02.588138 unknown[686]: fetched user config from "qemu" Nov 8 00:22:02.588888 ignition[686]: fetch-offline: fetch-offline passed Nov 8 00:22:02.589042 ignition[686]: Ignition finished successfully Nov 8 00:22:02.591465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:22:02.594145 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:22:02.601549 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:22:02.619809 ignition[788]: Ignition 2.19.0 Nov 8 00:22:02.619821 ignition[788]: Stage: kargs Nov 8 00:22:02.620021 ignition[788]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:22:02.620035 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:22:02.624595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:22:02.621046 ignition[788]: kargs: kargs passed Nov 8 00:22:02.621102 ignition[788]: Ignition finished successfully Nov 8 00:22:02.637412 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
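systemd-networkd's DHCPv4 line above reports the lease `10.0.0.74/16` with gateway `10.0.0.1`. The containing network is derivable with the standard-library `ipaddress` module:

```python
import ipaddress

# Sketch: deriving the network from the DHCPv4 lease systemd-networkd
# logs above (address 10.0.0.74/16, gateway 10.0.0.1).
iface = ipaddress.ip_interface("10.0.0.74/16")
print(iface.network)                    # 10.0.0.0/16

gateway = ipaddress.ip_address("10.0.0.1")
print(gateway in iface.network)         # True -> gateway is directly reachable
```

The gateway falling inside the interface's own /16 is what lets networkd install it as the default route without any extra on-link configuration.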
Nov 8 00:22:02.652481 ignition[795]: Ignition 2.19.0 Nov 8 00:22:02.652492 ignition[795]: Stage: disks Nov 8 00:22:02.652655 ignition[795]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:22:02.652667 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:22:02.655971 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:22:02.653602 ignition[795]: disks: disks passed Nov 8 00:22:02.658216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:22:02.653644 ignition[795]: Ignition finished successfully Nov 8 00:22:02.661588 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:22:02.662227 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:22:02.662798 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:22:02.663077 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:22:02.682367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:22:02.697412 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:22:02.704030 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:22:02.723355 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:22:02.807265 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:22:02.808079 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:22:02.809320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:22:02.821327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:22:02.822802 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:22:02.826350 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Nov 8 00:22:02.839759 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Nov 8 00:22:02.839806 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:22:02.839817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:22:02.839828 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:22:02.839839 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:22:02.826394 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:22:02.826420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:22:02.841143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:22:02.845085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:22:02.860361 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:22:02.891322 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:22:02.897430 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:22:02.902815 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:22:02.907105 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:22:02.993620 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:22:03.007322 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:22:03.010957 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:22:03.016255 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:22:03.034416 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 8 00:22:03.040021 ignition[929]: INFO : Ignition 2.19.0 Nov 8 00:22:03.040021 ignition[929]: INFO : Stage: mount Nov 8 00:22:03.042559 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:22:03.042559 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:22:03.047080 ignition[929]: INFO : mount: mount passed Nov 8 00:22:03.048332 ignition[929]: INFO : Ignition finished successfully Nov 8 00:22:03.051737 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:22:03.060403 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:22:03.321517 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:22:03.330500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:22:03.342428 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Nov 8 00:22:03.342468 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:22:03.342493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:22:03.343964 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:22:03.348257 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:22:03.349471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:22:03.386834 ignition[960]: INFO : Ignition 2.19.0 Nov 8 00:22:03.386834 ignition[960]: INFO : Stage: files Nov 8 00:22:03.389511 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:22:03.389511 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:22:03.389511 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:22:03.395656 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:22:03.395656 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:22:03.400673 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:22:03.400673 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:22:03.400673 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:22:03.400673 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:22:03.399148 unknown[960]: wrote ssh authorized keys file for user: core Nov 8 00:22:03.412184 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:22:03.449515 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:22:03.544629 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:22:03.547995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:22:03.547995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 8 00:22:03.630361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:22:03.706511 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:22:03.706511 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:22:03.712579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:22:03.899418 systemd-networkd[784]: eth0: Gained IPv6LL Nov 8 00:22:04.138215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:22:04.532735 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:22:04.532735 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:22:04.539201 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:22:04.576015 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:22:04.582055 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:22:04.584781 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:22:04.584781 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:22:04.584781 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:22:04.584781 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:22:04.584781 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:22:04.584781 ignition[960]: INFO : files: files passed Nov 8 00:22:04.584781 ignition[960]: INFO : Ignition finished successfully Nov 8 00:22:04.585649 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:22:04.601993 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:22:04.607968 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:22:04.612176 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:22:04.613753 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 8 00:22:04.618420 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 8 00:22:04.621707 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:22:04.621707 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:22:04.627821 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:22:04.632109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:22:04.636496 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:22:04.651386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:22:04.680505 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:22:04.682331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:22:04.686579 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:22:04.690221 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:22:04.693765 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:22:04.707441 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:22:04.723608 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:22:04.737504 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:22:04.747866 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:22:04.751584 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:22:04.755403 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:22:04.758417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:22:04.760122 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:22:04.764310 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:22:04.767754 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:22:04.770808 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:22:04.774426 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:22:04.778266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:22:04.781941 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:22:04.785392 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:22:04.789436 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:22:04.792874 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:22:04.796418 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:22:04.799131 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:22:04.800740 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:22:04.804535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:22:04.808192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:22:04.812135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:22:04.813756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:22:04.818014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:22:04.819712 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:22:04.823394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:22:04.825158 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:22:04.829110 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:22:04.832058 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:22:04.835284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:22:04.839860 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:22:04.842908 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:22:04.845983 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:22:04.847439 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:22:04.850645 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:22:04.852089 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:22:04.855456 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:22:04.857363 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:22:04.861575 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:22:04.863133 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:22:04.882472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:22:04.886600 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:22:04.889526 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:22:04.891215 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:22:04.895043 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:22:04.896722 ignition[1014]: INFO : Ignition 2.19.0
Nov 8 00:22:04.896722 ignition[1014]: INFO : Stage: umount
Nov 8 00:22:04.896722 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:22:04.896722 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:22:04.896722 ignition[1014]: INFO : umount: umount passed
Nov 8 00:22:04.896722 ignition[1014]: INFO : Ignition finished successfully
Nov 8 00:22:04.896784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:22:04.912898 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:22:04.914735 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:22:04.921455 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:22:04.923140 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:22:04.928839 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:22:04.931351 systemd[1]: Stopped target network.target - Network.
Nov 8 00:22:04.934357 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:22:04.935961 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:22:04.939310 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:22:04.939374 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:22:04.944016 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:22:04.944077 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:22:04.948882 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:22:04.948952 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:22:04.954551 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:22:04.958145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:22:04.964314 systemd-networkd[784]: eth0: DHCPv6 lease lost
Nov 8 00:22:04.968314 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:22:04.968501 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:22:04.972284 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:22:04.972411 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:22:04.977024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:22:04.977094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:22:04.993337 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:22:04.993978 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:22:04.994036 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:22:04.996970 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:22:04.997025 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:22:05.000802 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:22:05.000854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:22:05.004170 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:22:05.004222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:22:05.004797 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:22:05.022428 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:22:05.022597 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:22:05.033262 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:22:05.033459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:22:05.034577 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:22:05.034629 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:22:05.035117 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:22:05.035157 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:22:05.035637 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:22:05.035684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:22:05.047926 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:22:05.047979 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:22:05.052571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:22:05.052627 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:22:05.067473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:22:05.068127 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:22:05.068207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:22:05.071641 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:22:05.071705 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:22:05.072161 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:22:05.072215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:22:05.080200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:22:05.080281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:22:05.094808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:22:05.094959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:22:05.144710 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:22:05.144882 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:22:05.146316 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:22:05.149605 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:22:05.149667 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:22:05.167491 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:22:05.175585 systemd[1]: Switching root.
Nov 8 00:22:05.208474 systemd-journald[193]: Journal stopped
Nov 8 00:22:06.553668 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:22:06.553744 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:22:06.553777 kernel: SELinux: policy capability open_perms=1
Nov 8 00:22:06.553791 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:22:06.553803 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:22:06.553814 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:22:06.553826 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:22:06.553837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:22:06.553854 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:22:06.553868 kernel: audit: type=1403 audit(1762561325.701:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:22:06.553887 systemd[1]: Successfully loaded SELinux policy in 44.827ms.
Nov 8 00:22:06.553912 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.135ms.
Nov 8 00:22:06.553925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:22:06.553937 systemd[1]: Detected virtualization kvm.
Nov 8 00:22:06.553950 systemd[1]: Detected architecture x86-64.
Nov 8 00:22:06.553968 systemd[1]: Detected first boot.
Nov 8 00:22:06.553980 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:22:06.553993 zram_generator::config[1058]: No configuration found.
Nov 8 00:22:06.554008 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:22:06.554021 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:22:06.554033 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:22:06.554046 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:22:06.554058 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:22:06.554071 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:22:06.554084 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:22:06.554096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:22:06.554111 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:22:06.554124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:22:06.554136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:22:06.554148 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:22:06.554160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:22:06.554172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:22:06.554184 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:22:06.554197 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:22:06.554209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:22:06.554224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:22:06.554248 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:22:06.554261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:22:06.554274 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:22:06.554286 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:22:06.554298 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:22:06.554310 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:22:06.554326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:22:06.554343 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:22:06.554356 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:22:06.554369 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:22:06.554381 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:22:06.554393 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:22:06.554405 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:22:06.554418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:22:06.554430 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:22:06.554441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:22:06.554456 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:22:06.554469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:22:06.554481 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:22:06.554493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:06.554505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:22:06.554517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:22:06.554529 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:22:06.554543 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:22:06.554560 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:22:06.554573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:22:06.554587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:22:06.554602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:22:06.554614 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:22:06.554628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:22:06.554641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:22:06.554653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:22:06.554665 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:22:06.554680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:22:06.554693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:22:06.554705 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:22:06.554717 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:22:06.554729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:22:06.554742 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:22:06.554753 kernel: fuse: init (API version 7.39)
Nov 8 00:22:06.554772 kernel: loop: module loaded
Nov 8 00:22:06.554787 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:22:06.554800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:22:06.554813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:22:06.554826 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:22:06.554858 systemd-journald[1135]: Collecting audit messages is disabled.
Nov 8 00:22:06.554880 systemd-journald[1135]: Journal started
Nov 8 00:22:06.554904 systemd-journald[1135]: Runtime Journal (/run/log/journal/f439b5f8261d4ee99a8118d430b7958e) is 6.0M, max 48.4M, 42.3M free.
Nov 8 00:22:06.259385 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:22:06.282290 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 00:22:06.282778 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:22:06.558615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:22:06.564210 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:22:06.564260 systemd[1]: Stopped verity-setup.service.
Nov 8 00:22:06.568281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:06.571263 kernel: ACPI: bus type drm_connector registered
Nov 8 00:22:06.571294 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:22:06.574388 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:22:06.576274 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:22:06.578332 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:22:06.580081 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:22:06.582099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:22:06.584131 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:22:06.586032 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:22:06.588390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:22:06.590851 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:22:06.591040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:22:06.593479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:22:06.593698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:22:06.595923 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:22:06.596104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:22:06.598336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:22:06.598525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:22:06.600864 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:22:06.601042 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:22:06.603322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:22:06.603502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:22:06.605629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:22:06.607974 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:22:06.610380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:22:06.626680 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:22:06.641448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:22:06.645101 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:22:06.646931 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:22:06.646974 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:22:06.649931 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:22:06.653416 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:22:06.658151 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:22:06.660083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:22:06.662898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:22:06.665732 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:22:06.667693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:22:06.671655 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:22:06.673748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:22:06.675297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:22:06.683211 systemd-journald[1135]: Time spent on flushing to /var/log/journal/f439b5f8261d4ee99a8118d430b7958e is 23.561ms for 952 entries.
Nov 8 00:22:06.683211 systemd-journald[1135]: System Journal (/var/log/journal/f439b5f8261d4ee99a8118d430b7958e) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:22:06.722166 systemd-journald[1135]: Received client request to flush runtime journal.
Nov 8 00:22:06.722214 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:22:06.685626 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:22:06.689803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:22:06.695837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:22:06.698302 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:22:06.700501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:22:06.710491 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:22:06.712895 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:22:06.719570 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:22:06.732425 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:22:06.738393 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:22:06.743287 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:22:06.741447 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 8 00:22:06.741468 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 8 00:22:06.743696 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:22:06.747649 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:22:06.750146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:22:06.768906 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:22:06.773771 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:22:06.774988 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:22:06.778256 kernel: loop1: detected capacity change from 0 to 142488
Nov 8 00:22:06.782504 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 00:22:06.809363 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:22:06.816618 kernel: loop2: detected capacity change from 0 to 224512
Nov 8 00:22:06.818622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:22:06.844083 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Nov 8 00:22:06.844107 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Nov 8 00:22:06.849901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:22:06.861267 kernel: loop3: detected capacity change from 0 to 140768
Nov 8 00:22:06.873293 kernel: loop4: detected capacity change from 0 to 142488
Nov 8 00:22:06.888327 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 00:22:06.894366 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 8 00:22:06.895120 (sd-merge)[1200]: Merged extensions into '/usr'.
Nov 8 00:22:06.901105 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:22:06.901211 systemd[1]: Reloading...
Nov 8 00:22:06.981773 zram_generator::config[1226]: No configuration found.
Nov 8 00:22:07.042946 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:22:07.107921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:22:07.157347 systemd[1]: Reloading finished in 255 ms.
Nov 8 00:22:07.189843 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:22:07.192075 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:22:07.210375 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:22:07.213328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:22:07.218632 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:22:07.218642 systemd[1]: Reloading...
Nov 8 00:22:07.243605 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:22:07.244056 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:22:07.245206 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:22:07.245662 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Nov 8 00:22:07.245751 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Nov 8 00:22:07.249414 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:22:07.249426 systemd-tmpfiles[1264]: Skipping /boot
Nov 8 00:22:07.266723 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:22:07.266812 systemd-tmpfiles[1264]: Skipping /boot
Nov 8 00:22:07.284318 zram_generator::config[1290]: No configuration found.
Nov 8 00:22:07.404979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:22:07.456296 systemd[1]: Reloading finished in 237 ms.
Nov 8 00:22:07.475070 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:22:07.488138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:22:07.498580 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:22:07.501942 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:22:07.505606 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:22:07.509956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:22:07.514522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:22:07.518592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:22:07.524880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.525064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:22:07.528425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:22:07.533505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:22:07.537547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:22:07.539463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:22:07.543487 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:22:07.545266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.546370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:22:07.546572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:22:07.549002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:22:07.549181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:22:07.551884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:22:07.552061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:22:07.560040 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:22:07.573522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.573733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:22:07.574774 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Nov 8 00:22:07.578630 augenrules[1359]: No rules
Nov 8 00:22:07.579471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:22:07.585523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:22:07.590508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:22:07.594300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:22:07.595810 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:22:07.598336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.599738 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:22:07.602373 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:22:07.611633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:22:07.611847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:22:07.614792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:22:07.619394 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:22:07.622721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:22:07.622949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:22:07.627736 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:22:07.628024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:22:07.632557 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:22:07.642327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:22:07.661522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.661735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:22:07.669469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:22:07.680474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:22:07.686013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:22:07.690653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:22:07.694546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:22:07.704402 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:22:07.709288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1381)
Nov 8 00:22:07.710357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:22:07.710389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:22:07.711273 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:22:07.713022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:22:07.713206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:22:07.715877 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:22:07.716118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:22:07.718305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:22:07.718589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:22:07.721099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:22:07.721351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:22:07.730789 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:22:07.740638 systemd-resolved[1334]: Positive Trust Anchors:
Nov 8 00:22:07.740663 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:22:07.740696 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:22:07.744631 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Nov 8 00:22:07.747736 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:22:07.751719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:22:07.754391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:22:07.754454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:22:07.765420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:22:07.773200 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:22:07.771164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:22:07.783478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:22:07.791284 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:22:07.802060 systemd-networkd[1405]: lo: Link UP
Nov 8 00:22:07.802073 systemd-networkd[1405]: lo: Gained carrier
Nov 8 00:22:07.818258 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 8 00:22:07.819849 systemd-networkd[1405]: Enumeration completed
Nov 8 00:22:07.819971 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:22:07.821928 systemd[1]: Reached target network.target - Network.
Nov 8 00:22:07.822337 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:22:07.822342 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:22:07.823217 systemd-networkd[1405]: eth0: Link UP
Nov 8 00:22:07.823222 systemd-networkd[1405]: eth0: Gained carrier
Nov 8 00:22:07.823246 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:22:07.833414 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:22:07.836321 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:22:07.836661 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:22:07.856515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:22:07.908955 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:22:07.916050 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:22:07.916381 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:22:07.916572 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:22:07.916716 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:22:07.913439 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:22:07.917051 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 8 00:22:07.917412 systemd-timesyncd[1415]: Initial clock synchronization to Sat 2025-11-08 00:22:08.058373 UTC.
Nov 8 00:22:07.927920 kernel: kvm_amd: TSC scaling supported
Nov 8 00:22:07.927950 kernel: kvm_amd: Nested Virtualization enabled
Nov 8 00:22:07.927973 kernel: kvm_amd: Nested Paging enabled
Nov 8 00:22:07.927986 kernel: kvm_amd: LBR virtualization supported
Nov 8 00:22:07.929851 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 8 00:22:07.929874 kernel: kvm_amd: Virtual GIF supported
Nov 8 00:22:07.952279 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:22:07.982893 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:22:08.017569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:22:08.029441 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:22:08.039972 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:22:08.073633 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:22:08.075911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:22:08.077714 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:22:08.079541 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:22:08.081596 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:22:08.084023 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:22:08.085888 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:22:08.087951 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:22:08.090229 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:22:08.090272 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:22:08.091991 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:22:08.094645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:22:08.098345 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:22:08.110375 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:22:08.113889 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:22:08.116296 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:22:08.118164 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:22:08.118831 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:22:08.121280 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:22:08.121307 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:22:08.122417 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:22:08.125288 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:22:08.128343 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:22:08.130374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:22:08.135505 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:22:08.138286 jq[1441]: false
Nov 8 00:22:08.137708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:22:08.138954 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:22:08.152595 dbus-daemon[1440]: [system] SELinux support is enabled
Nov 8 00:22:08.154397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:22:08.157333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:22:08.160379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:22:08.167788 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:22:08.170090 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:22:08.170668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:22:08.181528 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:22:08.181722 extend-filesystems[1442]: Found loop3
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found loop4
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found loop5
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found sr0
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda1
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda2
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda3
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found usr
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda4
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda6
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda7
Nov 8 00:22:08.188368 extend-filesystems[1442]: Found vda9
Nov 8 00:22:08.188368 extend-filesystems[1442]: Checking size of /dev/vda9
Nov 8 00:22:08.185547 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:22:08.188734 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:22:08.193200 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:22:08.210090 update_engine[1453]: I20251108 00:22:08.200006 1453 main.cc:92] Flatcar Update Engine starting
Nov 8 00:22:08.210090 update_engine[1453]: I20251108 00:22:08.204943 1453 update_check_scheduler.cc:74] Next update check in 2m51s
Nov 8 00:22:08.210437 jq[1459]: true
Nov 8 00:22:08.210774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:22:08.211000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:22:08.211432 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:22:08.211645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:22:08.214726 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:22:08.216344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:22:08.230364 extend-filesystems[1442]: Resized partition /dev/vda9
Nov 8 00:22:08.235653 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:22:08.240561 jq[1464]: true
Nov 8 00:22:08.244160 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:22:08.251963 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 8 00:22:08.250700 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:22:08.252750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:22:08.252843 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:22:08.259975 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1375)
Nov 8 00:22:08.256487 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:22:08.256518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:22:08.270536 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:22:08.274295 tar[1462]: linux-amd64/LICENSE
Nov 8 00:22:08.274666 tar[1462]: linux-amd64/helm
Nov 8 00:22:08.286718 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 8 00:22:08.286757 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:22:08.290994 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 8 00:22:08.295022 systemd-logind[1449]: New seat seat0.
Nov 8 00:22:08.299074 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:22:08.315243 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 8 00:22:08.315243 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 8 00:22:08.315243 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 8 00:22:08.332137 extend-filesystems[1442]: Resized filesystem in /dev/vda9
Nov 8 00:22:08.316604 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:22:08.316846 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:22:08.335116 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:22:08.339856 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:22:08.342981 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:22:08.346095 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 8 00:22:08.472496 containerd[1465]: time="2025-11-08T00:22:08.472299417Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:22:08.496351 containerd[1465]: time="2025-11-08T00:22:08.496307033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.498216 containerd[1465]: time="2025-11-08T00:22:08.498171587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:22:08.498216 containerd[1465]: time="2025-11-08T00:22:08.498202440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:22:08.498216 containerd[1465]: time="2025-11-08T00:22:08.498218579Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499516303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499576989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499661105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499675654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499912036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499928471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499942206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.499955531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.500062945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.500341884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:22:08.500806 containerd[1465]: time="2025-11-08T00:22:08.500480742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:22:08.501044 containerd[1465]: time="2025-11-08T00:22:08.500494650Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:22:08.501044 containerd[1465]: time="2025-11-08T00:22:08.500597383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:22:08.501044 containerd[1465]: time="2025-11-08T00:22:08.500656560Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506669812Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506716560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506733537Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506749085Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506763615Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:22:08.507117 containerd[1465]: time="2025-11-08T00:22:08.506900474Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:22:08.507286 containerd[1465]: time="2025-11-08T00:22:08.507250621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:22:08.507463 containerd[1465]: time="2025-11-08T00:22:08.507437951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:22:08.507500 containerd[1465]: time="2025-11-08T00:22:08.507468232Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:22:08.507500 containerd[1465]: time="2025-11-08T00:22:08.507486676Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:22:08.507552 containerd[1465]: time="2025-11-08T00:22:08.507507292Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507552 containerd[1465]: time="2025-11-08T00:22:08.507526706Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507552 containerd[1465]: time="2025-11-08T00:22:08.507543315Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507616 containerd[1465]: time="2025-11-08T00:22:08.507561453Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507616 containerd[1465]: time="2025-11-08T00:22:08.507593958Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507653 containerd[1465]: time="2025-11-08T00:22:08.507619193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507653 containerd[1465]: time="2025-11-08T00:22:08.507636159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507696 containerd[1465]: time="2025-11-08T00:22:08.507650892Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:22:08.507696 containerd[1465]: time="2025-11-08T00:22:08.507682805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507733 containerd[1465]: time="2025-11-08T00:22:08.507703023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507733 containerd[1465]: time="2025-11-08T00:22:08.507718918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507781 containerd[1465]: time="2025-11-08T00:22:08.507735049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507781 containerd[1465]: time="2025-11-08T00:22:08.507750944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507781 containerd[1465]: time="2025-11-08T00:22:08.507766901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507835 containerd[1465]: time="2025-11-08T00:22:08.507781338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507835 containerd[1465]: time="2025-11-08T00:22:08.507802443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507835 containerd[1465]: time="2025-11-08T00:22:08.507820266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507899 containerd[1465]: time="2025-11-08T00:22:08.507839740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507899 containerd[1465]: time="2025-11-08T00:22:08.507858867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507899 containerd[1465]: time="2025-11-08T00:22:08.507873703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507899 containerd[1465]: time="2025-11-08T00:22:08.507889690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507975 containerd[1465]: time="2025-11-08T00:22:08.507909786Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:22:08.507975 containerd[1465]: time="2025-11-08T00:22:08.507939140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507975 containerd[1465]: time="2025-11-08T00:22:08.507957452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.507975 containerd[1465]: time="2025-11-08T00:22:08.507972440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:22:08.508052 containerd[1465]: time="2025-11-08T00:22:08.508037602Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:22:08.508073 containerd[1465]: time="2025-11-08T00:22:08.508059370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:22:08.508096 containerd[1465]: time="2025-11-08T00:22:08.508073287Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:22:08.508117 containerd[1465]: time="2025-11-08T00:22:08.508098736Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:22:08.508117 containerd[1465]: time="2025-11-08T00:22:08.508112337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.508117 containerd[1465]: time="2025-11-08T00:22:08.508127193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:22:08.508194 containerd[1465]: time="2025-11-08T00:22:08.508146881Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:22:08.508194 containerd[1465]: time="2025-11-08T00:22:08.508161614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:22:08.508870 containerd[1465]: time="2025-11-08T00:22:08.508787041Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:22:08.508870 containerd[1465]: time="2025-11-08T00:22:08.508863785Z" level=info msg="Connect containerd service"
Nov 8 00:22:08.509045 containerd[1465]: time="2025-11-08T00:22:08.508907760Z" level=info msg="using legacy CRI server"
Nov 8 00:22:08.509045 containerd[1465]: time="2025-11-08T00:22:08.508917181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:22:08.509147 containerd[1465]: time="2025-11-08T00:22:08.509116796Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:22:08.509922 containerd[1465]: time="2025-11-08T00:22:08.509884934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:22:08.510312 containerd[1465]: time="2025-11-08T00:22:08.510277232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:22:08.510363 containerd[1465]: time="2025-11-08T00:22:08.510342077Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:22:08.510419 containerd[1465]: time="2025-11-08T00:22:08.510381464Z" level=info msg="Start subscribing containerd event"
Nov 8 00:22:08.510443 containerd[1465]: time="2025-11-08T00:22:08.510422635Z" level=info msg="Start recovering state"
Nov 8 00:22:08.510511 containerd[1465]: time="2025-11-08T00:22:08.510489225Z" level=info msg="Start event monitor"
Nov 8 00:22:08.510511 containerd[1465]: time="2025-11-08T00:22:08.510505864Z" level=info msg="Start snapshots syncer"
Nov 8 00:22:08.510550 containerd[1465]: time="2025-11-08T00:22:08.510516366Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:22:08.510550 containerd[1465]: time="2025-11-08T00:22:08.510527959Z" level=info msg="Start streaming server"
Nov 8 00:22:08.510759 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:22:08.516335 containerd[1465]: time="2025-11-08T00:22:08.516296775Z" level=info msg="containerd successfully booted in 0.045111s"
Nov 8 00:22:08.527932 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:22:08.554238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:22:08.565920 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:22:08.575512 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:22:08.575821 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:22:08.583507 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:22:08.596751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:22:08.606552 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:22:08.609417 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:22:08.611400 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:22:08.736356 tar[1462]: linux-amd64/README.md Nov 8 00:22:08.752295 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:22:09.022833 systemd-networkd[1405]: eth0: Gained IPv6LL Nov 8 00:22:09.025898 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:22:09.028449 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:22:09.041462 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:22:09.044609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:09.047491 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:22:09.068507 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:22:09.068763 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:22:09.071201 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:22:09.074347 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:22:09.776876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:09.779713 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:22:09.782098 systemd[1]: Startup finished in 1.082s (kernel) + 5.948s (initrd) + 4.124s (userspace) = 11.155s. 
Nov 8 00:22:09.782678 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:22:10.189280 kubelet[1554]: E1108 00:22:10.189122 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:22:10.193465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:22:10.193690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:22:10.194040 systemd[1]: kubelet.service: Consumed 1.006s CPU time. Nov 8 00:22:13.633543 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:22:13.635217 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498). Nov 8 00:22:13.888900 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:13.891031 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:13.899821 systemd-logind[1449]: New session 1 of user core. Nov 8 00:22:13.901172 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:22:13.914495 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:22:13.926483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:22:13.937529 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:22:13.940825 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:22:14.046778 systemd[1572]: Queued start job for default target default.target. 
Nov 8 00:22:14.057649 systemd[1572]: Created slice app.slice - User Application Slice. Nov 8 00:22:14.057679 systemd[1572]: Reached target paths.target - Paths. Nov 8 00:22:14.057693 systemd[1572]: Reached target timers.target - Timers. Nov 8 00:22:14.059316 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:22:14.070761 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:22:14.070899 systemd[1572]: Reached target sockets.target - Sockets. Nov 8 00:22:14.070918 systemd[1572]: Reached target basic.target - Basic System. Nov 8 00:22:14.070956 systemd[1572]: Reached target default.target - Main User Target. Nov 8 00:22:14.070996 systemd[1572]: Startup finished in 123ms. Nov 8 00:22:14.071735 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:22:14.073738 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:22:14.140521 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:57506.service - OpenSSH per-connection server daemon (10.0.0.1:57506). Nov 8 00:22:14.182536 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 57506 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.184184 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.188426 systemd-logind[1449]: New session 2 of user core. Nov 8 00:22:14.198378 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:22:14.253083 sshd[1583]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.262884 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:57506.service: Deactivated successfully. Nov 8 00:22:14.264747 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:22:14.266173 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:22:14.277505 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:57508.service - OpenSSH per-connection server daemon (10.0.0.1:57508). 
Nov 8 00:22:14.278418 systemd-logind[1449]: Removed session 2. Nov 8 00:22:14.310741 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 57508 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.312361 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.316149 systemd-logind[1449]: New session 3 of user core. Nov 8 00:22:14.329415 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:22:14.380617 sshd[1590]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.388119 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:57508.service: Deactivated successfully. Nov 8 00:22:14.389848 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:22:14.391644 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:22:14.401528 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:57512.service - OpenSSH per-connection server daemon (10.0.0.1:57512). Nov 8 00:22:14.402549 systemd-logind[1449]: Removed session 3. Nov 8 00:22:14.433790 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 57512 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.435392 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.439275 systemd-logind[1449]: New session 4 of user core. Nov 8 00:22:14.449390 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:22:14.504059 sshd[1597]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.518828 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:57512.service: Deactivated successfully. Nov 8 00:22:14.520552 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:22:14.522388 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:22:14.532545 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:57518.service - OpenSSH per-connection server daemon (10.0.0.1:57518). 
Nov 8 00:22:14.533661 systemd-logind[1449]: Removed session 4. Nov 8 00:22:14.566121 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 57518 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.567792 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.572393 systemd-logind[1449]: New session 5 of user core. Nov 8 00:22:14.580589 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:22:14.643161 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:22:14.643627 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:22:14.663930 sudo[1607]: pam_unix(sudo:session): session closed for user root Nov 8 00:22:14.666421 sshd[1604]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.688281 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:57518.service: Deactivated successfully. Nov 8 00:22:14.690321 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:22:14.692170 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:22:14.693627 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:57522.service - OpenSSH per-connection server daemon (10.0.0.1:57522). Nov 8 00:22:14.694471 systemd-logind[1449]: Removed session 5. Nov 8 00:22:14.748337 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 57522 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.750060 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.754051 systemd-logind[1449]: New session 6 of user core. Nov 8 00:22:14.764425 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:22:14.820604 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:22:14.820962 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:22:14.825590 sudo[1616]: pam_unix(sudo:session): session closed for user root Nov 8 00:22:14.833458 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:22:14.833848 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:22:14.854531 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:22:14.856798 auditctl[1619]: No rules Nov 8 00:22:14.858274 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:22:14.858596 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:22:14.860590 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:22:14.902534 augenrules[1637]: No rules Nov 8 00:22:14.905089 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:22:14.906857 sudo[1615]: pam_unix(sudo:session): session closed for user root Nov 8 00:22:14.909398 sshd[1612]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.923003 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:57522.service: Deactivated successfully. Nov 8 00:22:14.925633 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:22:14.927528 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:22:14.939784 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:57524.service - OpenSSH per-connection server daemon (10.0.0.1:57524). Nov 8 00:22:14.940983 systemd-logind[1449]: Removed session 6. 
Nov 8 00:22:14.977892 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:22:14.980363 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.986559 systemd-logind[1449]: New session 7 of user core. Nov 8 00:22:14.996640 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:22:15.059074 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:22:15.060890 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:22:16.148525 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:22:16.148732 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:22:16.572939 dockerd[1667]: time="2025-11-08T00:22:16.572765587Z" level=info msg="Starting up" Nov 8 00:22:17.039692 dockerd[1667]: time="2025-11-08T00:22:17.039538577Z" level=info msg="Loading containers: start." Nov 8 00:22:17.163283 kernel: Initializing XFRM netlink socket Nov 8 00:22:17.241351 systemd-networkd[1405]: docker0: Link UP Nov 8 00:22:17.265769 dockerd[1667]: time="2025-11-08T00:22:17.265735819Z" level=info msg="Loading containers: done." Nov 8 00:22:17.281914 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3041214419-merged.mount: Deactivated successfully. 
Nov 8 00:22:17.284945 dockerd[1667]: time="2025-11-08T00:22:17.284897114Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:22:17.285022 dockerd[1667]: time="2025-11-08T00:22:17.284997291Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:22:17.285146 dockerd[1667]: time="2025-11-08T00:22:17.285122465Z" level=info msg="Daemon has completed initialization" Nov 8 00:22:17.323010 dockerd[1667]: time="2025-11-08T00:22:17.322849998Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:22:17.323094 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:22:18.071450 containerd[1465]: time="2025-11-08T00:22:18.071392666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:22:18.785044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772116414.mount: Deactivated successfully. 
Nov 8 00:22:19.764721 containerd[1465]: time="2025-11-08T00:22:19.764645926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:19.765675 containerd[1465]: time="2025-11-08T00:22:19.765582933Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:22:19.766867 containerd[1465]: time="2025-11-08T00:22:19.766836837Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:19.769631 containerd[1465]: time="2025-11-08T00:22:19.769575632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:19.770747 containerd[1465]: time="2025-11-08T00:22:19.770707744Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.699269066s" Nov 8 00:22:19.770747 containerd[1465]: time="2025-11-08T00:22:19.770743084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:22:19.771304 containerd[1465]: time="2025-11-08T00:22:19.771277780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:22:20.444041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 8 00:22:20.488481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:20.693072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:20.697560 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:22:21.090907 kubelet[1881]: E1108 00:22:21.090681 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:22:21.099562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:22:21.099890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:22:22.333328 containerd[1465]: time="2025-11-08T00:22:22.333257420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:22.349312 containerd[1465]: time="2025-11-08T00:22:22.349258277Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:22:22.406003 containerd[1465]: time="2025-11-08T00:22:22.405964824Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:22.481502 containerd[1465]: time="2025-11-08T00:22:22.481431358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:22.482989 containerd[1465]: time="2025-11-08T00:22:22.482947918Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.711566287s" Nov 8 00:22:22.482989 containerd[1465]: time="2025-11-08T00:22:22.482984335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:22:22.483744 containerd[1465]: time="2025-11-08T00:22:22.483720181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:22:24.953342 containerd[1465]: time="2025-11-08T00:22:24.953264915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:24.954047 containerd[1465]: time="2025-11-08T00:22:24.953973357Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:22:24.955147 containerd[1465]: time="2025-11-08T00:22:24.955113836Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:24.958084 containerd[1465]: time="2025-11-08T00:22:24.958054313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:24.959156 containerd[1465]: time="2025-11-08T00:22:24.959128380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.475380276s" Nov 8 00:22:24.959189 containerd[1465]: time="2025-11-08T00:22:24.959160267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:22:24.959709 containerd[1465]: time="2025-11-08T00:22:24.959637644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:22:26.594533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193679846.mount: Deactivated successfully. Nov 8 00:22:27.567946 containerd[1465]: time="2025-11-08T00:22:27.567860135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:27.568833 containerd[1465]: time="2025-11-08T00:22:27.568787035Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:22:27.570472 containerd[1465]: time="2025-11-08T00:22:27.570435383Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:27.573176 containerd[1465]: time="2025-11-08T00:22:27.573099542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:27.573701 containerd[1465]: time="2025-11-08T00:22:27.573656633Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.613938087s" Nov 8 00:22:27.573701 containerd[1465]: time="2025-11-08T00:22:27.573689591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:22:27.574297 containerd[1465]: time="2025-11-08T00:22:27.574260638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:22:28.159624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115427480.mount: Deactivated successfully. Nov 8 00:22:29.630981 containerd[1465]: time="2025-11-08T00:22:29.630884113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:29.632170 containerd[1465]: time="2025-11-08T00:22:29.632089751Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:22:29.634128 containerd[1465]: time="2025-11-08T00:22:29.634015879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:29.638512 containerd[1465]: time="2025-11-08T00:22:29.638449235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:29.639601 containerd[1465]: time="2025-11-08T00:22:29.639570574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.065278648s" Nov 8 00:22:29.639601 containerd[1465]: time="2025-11-08T00:22:29.639603512Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:22:29.640052 containerd[1465]: time="2025-11-08T00:22:29.640014842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:22:30.084579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883111692.mount: Deactivated successfully. Nov 8 00:22:30.090851 containerd[1465]: time="2025-11-08T00:22:30.090783653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:30.091596 containerd[1465]: time="2025-11-08T00:22:30.091522468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:22:30.092951 containerd[1465]: time="2025-11-08T00:22:30.092913687Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:30.095805 containerd[1465]: time="2025-11-08T00:22:30.095751299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:30.096657 containerd[1465]: time="2025-11-08T00:22:30.096614805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 456.567919ms" Nov 8 
00:22:30.096657 containerd[1465]: time="2025-11-08T00:22:30.096650565Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:22:30.097162 containerd[1465]: time="2025-11-08T00:22:30.097137574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:22:30.691297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352390646.mount: Deactivated successfully. Nov 8 00:22:31.350299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:22:31.359418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:31.561327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:31.566997 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:22:31.626500 kubelet[2018]: E1108 00:22:31.626295 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:22:31.630818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:22:31.631057 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:22:33.085156 containerd[1465]: time="2025-11-08T00:22:33.085059655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:33.086273 containerd[1465]: time="2025-11-08T00:22:33.086167285Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:22:33.087817 containerd[1465]: time="2025-11-08T00:22:33.087783969Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:33.091230 containerd[1465]: time="2025-11-08T00:22:33.091176204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:33.093060 containerd[1465]: time="2025-11-08T00:22:33.093017720Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.995849845s" Nov 8 00:22:33.093109 containerd[1465]: time="2025-11-08T00:22:33.093063424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:22:35.845030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:35.858455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:35.884849 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit session-7.scope)... Nov 8 00:22:35.884863 systemd[1]: Reloading... 
Nov 8 00:22:35.961268 zram_generator::config[2103]: No configuration found. Nov 8 00:22:36.189446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:22:36.272802 systemd[1]: Reloading finished in 387 ms. Nov 8 00:22:36.333472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:36.337083 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:22:36.337367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:36.339681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:36.522196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:36.526645 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:22:36.571446 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:22:36.571446 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:22:36.571446 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:22:36.571902 kubelet[2152]: I1108 00:22:36.571490 2152 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:22:36.938675 kubelet[2152]: I1108 00:22:36.938539 2152 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:22:36.938675 kubelet[2152]: I1108 00:22:36.938576 2152 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:22:36.938945 kubelet[2152]: I1108 00:22:36.938922 2152 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:22:36.992448 kubelet[2152]: E1108 00:22:36.992402 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:36.993599 kubelet[2152]: I1108 00:22:36.993582 2152 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:22:37.002636 kubelet[2152]: E1108 00:22:37.002592 2152 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:22:37.002636 kubelet[2152]: I1108 00:22:37.002629 2152 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:22:37.007815 kubelet[2152]: I1108 00:22:37.007788 2152 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:22:37.009895 kubelet[2152]: I1108 00:22:37.009838 2152 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:22:37.010082 kubelet[2152]: I1108 00:22:37.009878 2152 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:22:37.010082 kubelet[2152]: I1108 00:22:37.010076 2152 topology_manager.go:138] "Creating topology manager with none policy" Nov 
8 00:22:37.010082 kubelet[2152]: I1108 00:22:37.010085 2152 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:22:37.010319 kubelet[2152]: I1108 00:22:37.010227 2152 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:22:37.014738 kubelet[2152]: I1108 00:22:37.014705 2152 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:22:37.014807 kubelet[2152]: I1108 00:22:37.014781 2152 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:22:37.014836 kubelet[2152]: I1108 00:22:37.014814 2152 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:22:37.014836 kubelet[2152]: I1108 00:22:37.014828 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:22:37.020501 kubelet[2152]: I1108 00:22:37.020462 2152 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:22:37.021267 kubelet[2152]: I1108 00:22:37.020941 2152 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:22:37.023269 kubelet[2152]: W1108 00:22:37.022281 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:37.023269 kubelet[2152]: E1108 00:22:37.022345 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:37.023269 kubelet[2152]: W1108 00:22:37.022334 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:37.023269 kubelet[2152]: E1108 00:22:37.022395 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:37.024563 kubelet[2152]: W1108 00:22:37.024528 2152 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:22:37.027666 kubelet[2152]: I1108 00:22:37.027634 2152 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:22:37.027716 kubelet[2152]: I1108 00:22:37.027682 2152 server.go:1287] "Started kubelet" Nov 8 00:22:37.027924 kubelet[2152]: I1108 00:22:37.027790 2152 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:22:37.028829 kubelet[2152]: I1108 00:22:37.028806 2152 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:22:37.029316 kubelet[2152]: I1108 00:22:37.029291 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:22:37.031290 kubelet[2152]: I1108 00:22:37.030345 2152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:22:37.031290 kubelet[2152]: I1108 00:22:37.030424 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:22:37.031290 kubelet[2152]: I1108 00:22:37.030670 2152 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:22:37.034747 kubelet[2152]: E1108 00:22:37.032868 2152 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:22:37.034747 kubelet[2152]: I1108 00:22:37.033340 2152 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:22:37.034747 kubelet[2152]: E1108 00:22:37.031705 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0373c76be91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:22:37.027655313 +0000 UTC m=+0.497178978,LastTimestamp:2025-11-08 00:22:37.027655313 +0000 UTC m=+0.497178978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:22:37.034747 kubelet[2152]: I1108 00:22:37.033507 2152 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:22:37.034747 kubelet[2152]: I1108 00:22:37.033554 2152 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:22:37.035019 kubelet[2152]: W1108 00:22:37.034959 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:37.035056 kubelet[2152]: E1108 00:22:37.035016 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" 
logger="UnhandledError" Nov 8 00:22:37.035294 kubelet[2152]: E1108 00:22:37.035269 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Nov 8 00:22:37.035755 kubelet[2152]: I1108 00:22:37.035702 2152 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:22:37.035849 kubelet[2152]: I1108 00:22:37.035831 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:22:37.035913 kubelet[2152]: E1108 00:22:37.035889 2152 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:22:37.036882 kubelet[2152]: I1108 00:22:37.036865 2152 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:22:37.054782 kubelet[2152]: I1108 00:22:37.054719 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:22:37.055625 kubelet[2152]: I1108 00:22:37.055591 2152 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:22:37.055625 kubelet[2152]: I1108 00:22:37.055615 2152 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:22:37.055730 kubelet[2152]: I1108 00:22:37.055635 2152 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:22:37.057014 kubelet[2152]: I1108 00:22:37.056329 2152 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:22:37.057014 kubelet[2152]: I1108 00:22:37.056356 2152 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:22:37.057014 kubelet[2152]: I1108 00:22:37.056513 2152 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:22:37.057014 kubelet[2152]: I1108 00:22:37.056525 2152 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:22:37.057014 kubelet[2152]: E1108 00:22:37.056582 2152 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:22:37.058585 kubelet[2152]: W1108 00:22:37.058531 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:37.058637 kubelet[2152]: E1108 00:22:37.058589 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:37.133896 kubelet[2152]: E1108 00:22:37.133832 2152 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:22:37.157313 kubelet[2152]: E1108 00:22:37.157263 2152 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:22:37.234637 kubelet[2152]: E1108 00:22:37.234494 2152 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:22:37.236119 kubelet[2152]: E1108 00:22:37.236083 2152 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Nov 8 00:22:37.335510 kubelet[2152]: E1108 00:22:37.335446 2152 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:22:37.357711 kubelet[2152]: E1108 00:22:37.357642 2152 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:22:37.436277 kubelet[2152]: E1108 00:22:37.436195 2152 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:22:37.484176 kubelet[2152]: I1108 00:22:37.484078 2152 policy_none.go:49] "None policy: Start" Nov 8 00:22:37.484176 kubelet[2152]: I1108 00:22:37.484137 2152 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:22:37.484176 kubelet[2152]: I1108 00:22:37.484174 2152 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:22:37.493574 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:22:37.504922 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:22:37.508287 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 8 00:22:37.519619 kubelet[2152]: I1108 00:22:37.519566 2152 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:22:37.519874 kubelet[2152]: I1108 00:22:37.519855 2152 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:22:37.519939 kubelet[2152]: I1108 00:22:37.519874 2152 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:22:37.520255 kubelet[2152]: I1108 00:22:37.520185 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:22:37.521776 kubelet[2152]: E1108 00:22:37.521741 2152 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:22:37.521821 kubelet[2152]: E1108 00:22:37.521811 2152 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:22:37.551326 kubelet[2152]: E1108 00:22:37.551117 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0373c76be91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:22:37.027655313 +0000 UTC m=+0.497178978,LastTimestamp:2025-11-08 00:22:37.027655313 +0000 UTC m=+0.497178978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:22:37.623018 kubelet[2152]: I1108 00:22:37.622957 2152 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Nov 8 00:22:37.623651 kubelet[2152]: E1108 00:22:37.623413 2152 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Nov 8 00:22:37.637503 kubelet[2152]: E1108 00:22:37.637427 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Nov 8 00:22:37.767908 systemd[1]: Created slice kubepods-burstable-pod7621327ec9cdb1a77dbec6d17851b2b7.slice - libcontainer container kubepods-burstable-pod7621327ec9cdb1a77dbec6d17851b2b7.slice. Nov 8 00:22:37.790558 kubelet[2152]: E1108 00:22:37.790522 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:22:37.794864 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 8 00:22:37.797015 kubelet[2152]: E1108 00:22:37.796994 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:22:37.800341 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Nov 8 00:22:37.802071 kubelet[2152]: E1108 00:22:37.802043 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:22:37.825445 kubelet[2152]: I1108 00:22:37.825396 2152 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:22:37.825851 kubelet[2152]: E1108 00:22:37.825812 2152 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Nov 8 00:22:37.838577 kubelet[2152]: I1108 00:22:37.838509 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:37.838577 kubelet[2152]: I1108 00:22:37.838565 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:37.838779 kubelet[2152]: I1108 00:22:37.838603 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:37.838779 kubelet[2152]: I1108 00:22:37.838626 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:37.838779 kubelet[2152]: I1108 00:22:37.838647 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:37.838779 kubelet[2152]: I1108 00:22:37.838666 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:37.838779 kubelet[2152]: I1108 00:22:37.838687 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:37.838931 kubelet[2152]: I1108 00:22:37.838707 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:37.838931 kubelet[2152]: I1108 00:22:37.838731 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:38.009932 kubelet[2152]: W1108 00:22:38.009887 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:38.010092 kubelet[2152]: E1108 00:22:38.009937 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:38.091641 kubelet[2152]: E1108 00:22:38.091476 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:38.092501 containerd[1465]: time="2025-11-08T00:22:38.092445847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7621327ec9cdb1a77dbec6d17851b2b7,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:38.097632 kubelet[2152]: E1108 00:22:38.097605 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:38.098219 containerd[1465]: time="2025-11-08T00:22:38.098157102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:38.103395 kubelet[2152]: E1108 00:22:38.103357 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:38.103744 containerd[1465]: time="2025-11-08T00:22:38.103704537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:38.109527 kubelet[2152]: W1108 00:22:38.109489 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:38.109576 kubelet[2152]: E1108 00:22:38.109529 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:38.230258 kubelet[2152]: I1108 00:22:38.230204 2152 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:22:38.230706 kubelet[2152]: E1108 00:22:38.230668 2152 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Nov 8 00:22:38.377752 kubelet[2152]: W1108 00:22:38.377572 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:38.377752 kubelet[2152]: E1108 00:22:38.377670 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:38.438834 kubelet[2152]: E1108 00:22:38.438778 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Nov 8 00:22:38.581903 kubelet[2152]: W1108 00:22:38.581824 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Nov 8 00:22:38.581903 kubelet[2152]: E1108 00:22:38.581894 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:38.844471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622106139.mount: Deactivated successfully. 
Nov 8 00:22:38.848783 containerd[1465]: time="2025-11-08T00:22:38.848738305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:22:38.851785 containerd[1465]: time="2025-11-08T00:22:38.851735779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:22:38.852653 containerd[1465]: time="2025-11-08T00:22:38.852592932Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:22:38.853621 containerd[1465]: time="2025-11-08T00:22:38.853581203Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:22:38.854450 containerd[1465]: time="2025-11-08T00:22:38.854415846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:22:38.855297 containerd[1465]: time="2025-11-08T00:22:38.855246881Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:22:38.856282 containerd[1465]: time="2025-11-08T00:22:38.856218555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:22:38.858404 containerd[1465]: time="2025-11-08T00:22:38.858364479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:22:38.860691 
containerd[1465]: time="2025-11-08T00:22:38.860648937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 768.090873ms" Nov 8 00:22:38.862808 containerd[1465]: time="2025-11-08T00:22:38.862761707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 758.983438ms" Nov 8 00:22:38.863391 containerd[1465]: time="2025-11-08T00:22:38.863360443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.078095ms" Nov 8 00:22:39.060371 kubelet[2152]: I1108 00:22:39.033336 2152 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:22:39.060371 kubelet[2152]: E1108 00:22:39.033617 2152 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Nov 8 00:22:39.072137 containerd[1465]: time="2025-11-08T00:22:39.071364239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:39.072137 containerd[1465]: time="2025-11-08T00:22:39.071438329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:39.072137 containerd[1465]: time="2025-11-08T00:22:39.071452169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.072137 containerd[1465]: time="2025-11-08T00:22:39.071535187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.074804 containerd[1465]: time="2025-11-08T00:22:39.074711617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:39.079481 containerd[1465]: time="2025-11-08T00:22:39.078285663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:39.079481 containerd[1465]: time="2025-11-08T00:22:39.078309084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.079481 containerd[1465]: time="2025-11-08T00:22:39.078399991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.086208 containerd[1465]: time="2025-11-08T00:22:39.086071082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:39.092684 containerd[1465]: time="2025-11-08T00:22:39.092447742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:39.092684 containerd[1465]: time="2025-11-08T00:22:39.092475362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.092684 containerd[1465]: time="2025-11-08T00:22:39.092579105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.186399 kubelet[2152]: E1108 00:22:39.185809 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:22:39.236717 systemd[1]: Started cri-containerd-6f05820defeb285e5a0b20f4ad2a12d4a527c246d39578b7b4dc8198ea10da71.scope - libcontainer container 6f05820defeb285e5a0b20f4ad2a12d4a527c246d39578b7b4dc8198ea10da71. Nov 8 00:22:39.242479 systemd[1]: Started cri-containerd-91c0261a47d32e9482b80be89a48a94c418bebbb03a7f2968b4448eaab52d50b.scope - libcontainer container 91c0261a47d32e9482b80be89a48a94c418bebbb03a7f2968b4448eaab52d50b. Nov 8 00:22:39.244374 systemd[1]: Started cri-containerd-9e258335dacbe14f65772c3b7fee2427e4cb3490a6288f9ad5d31f0be0d5a890.scope - libcontainer container 9e258335dacbe14f65772c3b7fee2427e4cb3490a6288f9ad5d31f0be0d5a890. 
Nov 8 00:22:39.290041 containerd[1465]: time="2025-11-08T00:22:39.289863313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7621327ec9cdb1a77dbec6d17851b2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c0261a47d32e9482b80be89a48a94c418bebbb03a7f2968b4448eaab52d50b\"" Nov 8 00:22:39.290792 kubelet[2152]: E1108 00:22:39.290758 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:39.293272 containerd[1465]: time="2025-11-08T00:22:39.292800427Z" level=info msg="CreateContainer within sandbox \"91c0261a47d32e9482b80be89a48a94c418bebbb03a7f2968b4448eaab52d50b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:22:39.304136 containerd[1465]: time="2025-11-08T00:22:39.304093689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f05820defeb285e5a0b20f4ad2a12d4a527c246d39578b7b4dc8198ea10da71\"" Nov 8 00:22:39.305035 kubelet[2152]: E1108 00:22:39.304992 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:39.307864 containerd[1465]: time="2025-11-08T00:22:39.307825747Z" level=info msg="CreateContainer within sandbox \"6f05820defeb285e5a0b20f4ad2a12d4a527c246d39578b7b4dc8198ea10da71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:22:39.309387 containerd[1465]: time="2025-11-08T00:22:39.309347519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e258335dacbe14f65772c3b7fee2427e4cb3490a6288f9ad5d31f0be0d5a890\"" Nov 8 00:22:39.313953 
kubelet[2152]: E1108 00:22:39.313926 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:39.315942 containerd[1465]: time="2025-11-08T00:22:39.315906652Z" level=info msg="CreateContainer within sandbox \"9e258335dacbe14f65772c3b7fee2427e4cb3490a6288f9ad5d31f0be0d5a890\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:22:39.325725 containerd[1465]: time="2025-11-08T00:22:39.325682482Z" level=info msg="CreateContainer within sandbox \"91c0261a47d32e9482b80be89a48a94c418bebbb03a7f2968b4448eaab52d50b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e920eff88adf72d33a40547308e94e5945693366bf44edbc38e0ff4ecced7d8e\"" Nov 8 00:22:39.326192 containerd[1465]: time="2025-11-08T00:22:39.326158550Z" level=info msg="StartContainer for \"e920eff88adf72d33a40547308e94e5945693366bf44edbc38e0ff4ecced7d8e\"" Nov 8 00:22:39.336159 containerd[1465]: time="2025-11-08T00:22:39.336104527Z" level=info msg="CreateContainer within sandbox \"6f05820defeb285e5a0b20f4ad2a12d4a527c246d39578b7b4dc8198ea10da71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2d9f72b4fe1b1001981214ee0c2b74fc850668a49ff873620d2343a27d36fe84\"" Nov 8 00:22:39.336933 containerd[1465]: time="2025-11-08T00:22:39.336902116Z" level=info msg="StartContainer for \"2d9f72b4fe1b1001981214ee0c2b74fc850668a49ff873620d2343a27d36fe84\"" Nov 8 00:22:39.337441 containerd[1465]: time="2025-11-08T00:22:39.337415614Z" level=info msg="CreateContainer within sandbox \"9e258335dacbe14f65772c3b7fee2427e4cb3490a6288f9ad5d31f0be0d5a890\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa86031cf039d62f3ddbb1ca2f8136b760dc1cac63bb65f5af9f7fa06535e7d1\"" Nov 8 00:22:39.337711 containerd[1465]: time="2025-11-08T00:22:39.337675334Z" level=info msg="StartContainer for 
\"aa86031cf039d62f3ddbb1ca2f8136b760dc1cac63bb65f5af9f7fa06535e7d1\"" Nov 8 00:22:39.361499 systemd[1]: Started cri-containerd-e920eff88adf72d33a40547308e94e5945693366bf44edbc38e0ff4ecced7d8e.scope - libcontainer container e920eff88adf72d33a40547308e94e5945693366bf44edbc38e0ff4ecced7d8e. Nov 8 00:22:39.365805 systemd[1]: Started cri-containerd-aa86031cf039d62f3ddbb1ca2f8136b760dc1cac63bb65f5af9f7fa06535e7d1.scope - libcontainer container aa86031cf039d62f3ddbb1ca2f8136b760dc1cac63bb65f5af9f7fa06535e7d1. Nov 8 00:22:39.369271 systemd[1]: Started cri-containerd-2d9f72b4fe1b1001981214ee0c2b74fc850668a49ff873620d2343a27d36fe84.scope - libcontainer container 2d9f72b4fe1b1001981214ee0c2b74fc850668a49ff873620d2343a27d36fe84. Nov 8 00:22:39.423545 containerd[1465]: time="2025-11-08T00:22:39.423490974Z" level=info msg="StartContainer for \"aa86031cf039d62f3ddbb1ca2f8136b760dc1cac63bb65f5af9f7fa06535e7d1\" returns successfully" Nov 8 00:22:39.423670 containerd[1465]: time="2025-11-08T00:22:39.423609670Z" level=info msg="StartContainer for \"e920eff88adf72d33a40547308e94e5945693366bf44edbc38e0ff4ecced7d8e\" returns successfully" Nov 8 00:22:39.429118 containerd[1465]: time="2025-11-08T00:22:39.429081509Z" level=info msg="StartContainer for \"2d9f72b4fe1b1001981214ee0c2b74fc850668a49ff873620d2343a27d36fe84\" returns successfully" Nov 8 00:22:40.070880 kubelet[2152]: E1108 00:22:40.070832 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:22:40.071337 kubelet[2152]: E1108 00:22:40.070969 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:40.073705 kubelet[2152]: E1108 00:22:40.073675 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Nov 8 00:22:40.074048 kubelet[2152]: E1108 00:22:40.074024 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:40.075225 kubelet[2152]: E1108 00:22:40.075197 2152 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:22:40.075331 kubelet[2152]: E1108 00:22:40.075309 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:40.453946 kubelet[2152]: E1108 00:22:40.453819 2152 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:22:40.635741 kubelet[2152]: I1108 00:22:40.635692 2152 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:22:40.641177 kubelet[2152]: I1108 00:22:40.641144 2152 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:22:40.735382 kubelet[2152]: I1108 00:22:40.735281 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:40.739438 kubelet[2152]: E1108 00:22:40.739410 2152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:40.739438 kubelet[2152]: I1108 00:22:40.739439 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:40.740975 kubelet[2152]: E1108 00:22:40.740947 2152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:40.740975 kubelet[2152]: I1108 00:22:40.740966 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:40.742228 kubelet[2152]: E1108 00:22:40.742200 2152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:41.017556 kubelet[2152]: I1108 00:22:41.017379 2152 apiserver.go:52] "Watching apiserver" Nov 8 00:22:41.034559 kubelet[2152]: I1108 00:22:41.034524 2152 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:22:41.075938 kubelet[2152]: I1108 00:22:41.075911 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:41.076372 kubelet[2152]: I1108 00:22:41.076024 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:41.077864 kubelet[2152]: E1108 00:22:41.077829 2152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:41.078017 kubelet[2152]: E1108 00:22:41.077991 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:41.078616 kubelet[2152]: E1108 00:22:41.078556 2152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:41.078800 kubelet[2152]: E1108 00:22:41.078755 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:42.276926 kubelet[2152]: I1108 00:22:42.276876 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:42.283667 kubelet[2152]: E1108 00:22:42.283611 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:42.474092 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)... Nov 8 00:22:42.474107 systemd[1]: Reloading... Nov 8 00:22:42.567280 zram_generator::config[2473]: No configuration found. Nov 8 00:22:42.700271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:22:42.794478 systemd[1]: Reloading finished in 319 ms. Nov 8 00:22:42.837078 kubelet[2152]: I1108 00:22:42.834966 2152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:42.842987 kubelet[2152]: E1108 00:22:42.842953 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:42.849609 kubelet[2152]: I1108 00:22:42.849484 2152 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:22:42.849609 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:42.867011 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:22:42.867324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:42.867382 systemd[1]: kubelet.service: Consumed 1.136s CPU time, 132.0M memory peak, 0B memory swap peak. 
Nov 8 00:22:42.879530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:22:43.057183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:22:43.062831 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:22:43.116472 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:22:43.116472 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:22:43.116472 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:22:43.116929 kubelet[2515]: I1108 00:22:43.116568 2515 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:22:43.123563 kubelet[2515]: I1108 00:22:43.123510 2515 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:22:43.123563 kubelet[2515]: I1108 00:22:43.123540 2515 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:22:43.123841 kubelet[2515]: I1108 00:22:43.123809 2515 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:22:43.124975 kubelet[2515]: I1108 00:22:43.124943 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 8 00:22:43.127716 kubelet[2515]: I1108 00:22:43.127679 2515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:22:43.135270 kubelet[2515]: E1108 00:22:43.132743 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:22:43.135270 kubelet[2515]: I1108 00:22:43.132795 2515 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:22:43.139015 kubelet[2515]: I1108 00:22:43.138975 2515 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:22:43.139393 kubelet[2515]: I1108 00:22:43.139349 2515 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:22:43.139614 kubelet[2515]: I1108 00:22:43.139392 2515 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:22:43.139614 kubelet[2515]: I1108 00:22:43.139590 2515 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:22:43.139614 kubelet[2515]: I1108 00:22:43.139611 2515 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:22:43.139778 kubelet[2515]: I1108 00:22:43.139677 2515 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:22:43.139934 kubelet[2515]: I1108 00:22:43.139900 2515 kubelet.go:446] "Attempting to 
sync node with API server" Nov 8 00:22:43.139934 kubelet[2515]: I1108 00:22:43.139930 2515 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:22:43.140013 kubelet[2515]: I1108 00:22:43.139953 2515 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:22:43.140013 kubelet[2515]: I1108 00:22:43.139966 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:22:43.141672 kubelet[2515]: I1108 00:22:43.141106 2515 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:22:43.141775 kubelet[2515]: I1108 00:22:43.141684 2515 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:22:43.142441 kubelet[2515]: I1108 00:22:43.142365 2515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:22:43.142441 kubelet[2515]: I1108 00:22:43.142411 2515 server.go:1287] "Started kubelet" Nov 8 00:22:43.145393 kubelet[2515]: I1108 00:22:43.143423 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:22:43.145393 kubelet[2515]: I1108 00:22:43.143802 2515 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:22:43.145393 kubelet[2515]: I1108 00:22:43.143860 2515 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:22:43.145393 kubelet[2515]: I1108 00:22:43.145114 2515 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:22:43.147718 kubelet[2515]: E1108 00:22:43.147680 2515 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:22:43.150624 kubelet[2515]: I1108 00:22:43.150576 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:22:43.156353 kubelet[2515]: I1108 00:22:43.154789 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:22:43.156353 kubelet[2515]: I1108 00:22:43.155537 2515 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:22:43.156353 kubelet[2515]: I1108 00:22:43.155660 2515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:22:43.156353 kubelet[2515]: I1108 00:22:43.155850 2515 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:22:43.159206 kubelet[2515]: I1108 00:22:43.159162 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:22:43.161859 kubelet[2515]: I1108 00:22:43.161817 2515 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:22:43.162098 kubelet[2515]: I1108 00:22:43.162015 2515 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:22:43.169978 kubelet[2515]: I1108 00:22:43.169938 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:22:43.171541 kubelet[2515]: I1108 00:22:43.171519 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:22:43.171621 kubelet[2515]: I1108 00:22:43.171553 2515 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:22:43.171621 kubelet[2515]: I1108 00:22:43.171575 2515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:22:43.171621 kubelet[2515]: I1108 00:22:43.171584 2515 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:22:43.171762 kubelet[2515]: E1108 00:22:43.171646 2515 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:22:43.205136 kubelet[2515]: I1108 00:22:43.205096 2515 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:22:43.205136 kubelet[2515]: I1108 00:22:43.205116 2515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:22:43.205136 kubelet[2515]: I1108 00:22:43.205137 2515 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:22:43.205369 kubelet[2515]: I1108 00:22:43.205329 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:22:43.205369 kubelet[2515]: I1108 00:22:43.205342 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:22:43.205369 kubelet[2515]: I1108 00:22:43.205369 2515 policy_none.go:49] "None policy: Start" Nov 8 00:22:43.205471 kubelet[2515]: I1108 00:22:43.205382 2515 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:22:43.205471 kubelet[2515]: I1108 00:22:43.205399 2515 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:22:43.208251 kubelet[2515]: I1108 00:22:43.205591 2515 state_mem.go:75] "Updated machine memory state" Nov 8 00:22:43.212508 kubelet[2515]: I1108 00:22:43.212478 2515 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:22:43.212718 kubelet[2515]: I1108 00:22:43.212669 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:22:43.212718 kubelet[2515]: I1108 00:22:43.212691 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:22:43.213943 kubelet[2515]: E1108 00:22:43.213594 2515 eviction_manager.go:267] "eviction 
manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:22:43.216492 kubelet[2515]: I1108 00:22:43.215343 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:22:43.273292 kubelet[2515]: I1108 00:22:43.273179 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.273563 kubelet[2515]: I1108 00:22:43.273327 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:43.273707 kubelet[2515]: I1108 00:22:43.273365 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:43.281278 kubelet[2515]: E1108 00:22:43.281148 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:43.281332 kubelet[2515]: E1108 00:22:43.281318 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.319655 kubelet[2515]: I1108 00:22:43.319490 2515 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:22:43.325923 kubelet[2515]: I1108 00:22:43.325881 2515 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:22:43.326058 kubelet[2515]: I1108 00:22:43.325953 2515 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:22:43.357299 kubelet[2515]: I1108 00:22:43.357230 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " 
pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:43.357299 kubelet[2515]: I1108 00:22:43.357295 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.357459 kubelet[2515]: I1108 00:22:43.357322 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.357459 kubelet[2515]: I1108 00:22:43.357344 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.357459 kubelet[2515]: I1108 00:22:43.357366 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:43.357459 kubelet[2515]: I1108 00:22:43.357386 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") 
" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.357459 kubelet[2515]: I1108 00:22:43.357407 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:22:43.357646 kubelet[2515]: I1108 00:22:43.357429 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:22:43.357646 kubelet[2515]: I1108 00:22:43.357450 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7621327ec9cdb1a77dbec6d17851b2b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7621327ec9cdb1a77dbec6d17851b2b7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:22:43.475746 sudo[2551]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 8 00:22:43.476188 sudo[2551]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 8 00:22:43.582465 kubelet[2515]: E1108 00:22:43.582333 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:43.582591 kubelet[2515]: E1108 00:22:43.582529 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:43.582903 kubelet[2515]: E1108 00:22:43.582840 2515 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:43.981065 sudo[2551]: pam_unix(sudo:session): session closed for user root
Nov 8 00:22:44.140910 kubelet[2515]: I1108 00:22:44.140863 2515 apiserver.go:52] "Watching apiserver"
Nov 8 00:22:44.156324 kubelet[2515]: I1108 00:22:44.156287 2515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:22:44.185156 kubelet[2515]: I1108 00:22:44.184920 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:22:44.185156 kubelet[2515]: E1108 00:22:44.184966 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:44.185280 kubelet[2515]: E1108 00:22:44.185253 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:44.332443 kubelet[2515]: E1108 00:22:44.331777 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:22:44.332443 kubelet[2515]: E1108 00:22:44.332017 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:44.340475 kubelet[2515]: I1108 00:22:44.340410 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.340389605 podStartE2EDuration="1.340389605s" podCreationTimestamp="2025-11-08 00:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:44.332077095 +0000 UTC m=+1.263495426" watchObservedRunningTime="2025-11-08 00:22:44.340389605 +0000 UTC m=+1.271807947"
Nov 8 00:22:44.347275 kubelet[2515]: I1108 00:22:44.346964 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.3469566090000002 podStartE2EDuration="2.346956609s" podCreationTimestamp="2025-11-08 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:44.346788215 +0000 UTC m=+1.278206556" watchObservedRunningTime="2025-11-08 00:22:44.346956609 +0000 UTC m=+1.278374950"
Nov 8 00:22:44.347275 kubelet[2515]: I1108 00:22:44.347025 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.347021612 podStartE2EDuration="2.347021612s" podCreationTimestamp="2025-11-08 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:44.340879086 +0000 UTC m=+1.272297427" watchObservedRunningTime="2025-11-08 00:22:44.347021612 +0000 UTC m=+1.278439953"
Nov 8 00:22:45.185998 kubelet[2515]: E1108 00:22:45.185887 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:45.186437 kubelet[2515]: E1108 00:22:45.186008 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:45.464996 sudo[1649]: pam_unix(sudo:session): session closed for user root
Nov 8 00:22:45.467148 sshd[1645]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:45.472037 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:57524.service: Deactivated successfully.
Nov 8 00:22:45.474356 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 00:22:45.474574 systemd[1]: session-7.scope: Consumed 5.909s CPU time, 158.3M memory peak, 0B memory swap peak.
Nov 8 00:22:45.475016 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Nov 8 00:22:45.475911 systemd-logind[1449]: Removed session 7.
Nov 8 00:22:45.903871 kubelet[2515]: E1108 00:22:45.903713 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:46.187603 kubelet[2515]: E1108 00:22:46.187474 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:46.187603 kubelet[2515]: E1108 00:22:46.187510 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:48.451441 kubelet[2515]: I1108 00:22:48.451388 2515 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:22:48.451936 kubelet[2515]: I1108 00:22:48.451891 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:22:48.451967 containerd[1465]: time="2025-11-08T00:22:48.451714401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 00:22:49.339726 systemd[1]: Created slice kubepods-besteffort-pod3d35c555_2853_40a3_a47e_b1866d098e3b.slice - libcontainer container kubepods-besteffort-pod3d35c555_2853_40a3_a47e_b1866d098e3b.slice.
Nov 8 00:22:49.354082 systemd[1]: Created slice kubepods-burstable-pod5226699a_f3f6_4b74_91e8_37c9e46225d1.slice - libcontainer container kubepods-burstable-pod5226699a_f3f6_4b74_91e8_37c9e46225d1.slice.
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395033 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-config-path\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395110 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d35c555-2853-40a3-a47e-b1866d098e3b-lib-modules\") pod \"kube-proxy-pr2lh\" (UID: \"3d35c555-2853-40a3-a47e-b1866d098e3b\") " pod="kube-system/kube-proxy-pr2lh"
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395136 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-xtables-lock\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395158 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d35c555-2853-40a3-a47e-b1866d098e3b-kube-proxy\") pod \"kube-proxy-pr2lh\" (UID: \"3d35c555-2853-40a3-a47e-b1866d098e3b\") " pod="kube-system/kube-proxy-pr2lh"
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395179 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d35c555-2853-40a3-a47e-b1866d098e3b-xtables-lock\") pod \"kube-proxy-pr2lh\" (UID: \"3d35c555-2853-40a3-a47e-b1866d098e3b\") " pod="kube-system/kube-proxy-pr2lh"
Nov 8 00:22:49.395293 kubelet[2515]: I1108 00:22:49.395200 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-lib-modules\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395326 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5226699a-f3f6-4b74-91e8-37c9e46225d1-clustermesh-secrets\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395441 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ldz\" (UniqueName: \"kubernetes.io/projected/3d35c555-2853-40a3-a47e-b1866d098e3b-kube-api-access-q6ldz\") pod \"kube-proxy-pr2lh\" (UID: \"3d35c555-2853-40a3-a47e-b1866d098e3b\") " pod="kube-system/kube-proxy-pr2lh"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395492 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-hostproc\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395510 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-hubble-tls\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395526 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-run\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395645 kubelet[2515]: I1108 00:22:49.395550 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-bpf-maps\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395565 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cni-path\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395581 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgvk\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-kube-api-access-ccgvk\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395600 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-etc-cni-netd\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395619 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-kernel\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395635 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-cgroup\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.395855 kubelet[2515]: I1108 00:22:49.395652 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-net\") pod \"cilium-5gzdb\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") " pod="kube-system/cilium-5gzdb"
Nov 8 00:22:49.435417 systemd[1]: Created slice kubepods-besteffort-pod9f457daf_2e29_4296_8a97_a781b779b90e.slice - libcontainer container kubepods-besteffort-pod9f457daf_2e29_4296_8a97_a781b779b90e.slice.
Nov 8 00:22:49.496071 kubelet[2515]: I1108 00:22:49.495997 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb25\" (UniqueName: \"kubernetes.io/projected/9f457daf-2e29-4296-8a97-a781b779b90e-kube-api-access-mnb25\") pod \"cilium-operator-6c4d7847fc-w2gd5\" (UID: \"9f457daf-2e29-4296-8a97-a781b779b90e\") " pod="kube-system/cilium-operator-6c4d7847fc-w2gd5"
Nov 8 00:22:49.497124 kubelet[2515]: I1108 00:22:49.496170 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f457daf-2e29-4296-8a97-a781b779b90e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w2gd5\" (UID: \"9f457daf-2e29-4296-8a97-a781b779b90e\") " pod="kube-system/cilium-operator-6c4d7847fc-w2gd5"
Nov 8 00:22:49.649975 kubelet[2515]: E1108 00:22:49.649831 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.650619 containerd[1465]: time="2025-11-08T00:22:49.650511577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pr2lh,Uid:3d35c555-2853-40a3-a47e-b1866d098e3b,Namespace:kube-system,Attempt:0,}"
Nov 8 00:22:49.658529 kubelet[2515]: E1108 00:22:49.658485 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.659077 containerd[1465]: time="2025-11-08T00:22:49.659040326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gzdb,Uid:5226699a-f3f6-4b74-91e8-37c9e46225d1,Namespace:kube-system,Attempt:0,}"
Nov 8 00:22:49.738454 kubelet[2515]: E1108 00:22:49.738407 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.739106 containerd[1465]: time="2025-11-08T00:22:49.739065649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w2gd5,Uid:9f457daf-2e29-4296-8a97-a781b779b90e,Namespace:kube-system,Attempt:0,}"
Nov 8 00:22:49.813921 containerd[1465]: time="2025-11-08T00:22:49.813784570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:22:49.814070 containerd[1465]: time="2025-11-08T00:22:49.813944100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:22:49.814070 containerd[1465]: time="2025-11-08T00:22:49.813986835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.814833 containerd[1465]: time="2025-11-08T00:22:49.814730386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.819615 containerd[1465]: time="2025-11-08T00:22:49.819406014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:22:49.820662 containerd[1465]: time="2025-11-08T00:22:49.820404414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:22:49.820662 containerd[1465]: time="2025-11-08T00:22:49.820447861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.820662 containerd[1465]: time="2025-11-08T00:22:49.820536839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.824815 containerd[1465]: time="2025-11-08T00:22:49.824563878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:22:49.824815 containerd[1465]: time="2025-11-08T00:22:49.824629520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:22:49.824815 containerd[1465]: time="2025-11-08T00:22:49.824640482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.824815 containerd[1465]: time="2025-11-08T00:22:49.824703348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:49.842530 systemd[1]: Started cri-containerd-12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f.scope - libcontainer container 12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f.
Nov 8 00:22:49.847439 systemd[1]: Started cri-containerd-1d8104fb2aee9c6619c1a7c9070b946db472b858238674983d33a9e5b7c81584.scope - libcontainer container 1d8104fb2aee9c6619c1a7c9070b946db472b858238674983d33a9e5b7c81584.
Nov 8 00:22:49.849145 systemd[1]: Started cri-containerd-a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0.scope - libcontainer container a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0.
Nov 8 00:22:49.881655 containerd[1465]: time="2025-11-08T00:22:49.881579408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gzdb,Uid:5226699a-f3f6-4b74-91e8-37c9e46225d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\""
Nov 8 00:22:49.881809 containerd[1465]: time="2025-11-08T00:22:49.881779739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pr2lh,Uid:3d35c555-2853-40a3-a47e-b1866d098e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d8104fb2aee9c6619c1a7c9070b946db472b858238674983d33a9e5b7c81584\""
Nov 8 00:22:49.882795 kubelet[2515]: E1108 00:22:49.882771 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.883094 kubelet[2515]: E1108 00:22:49.883076 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.886819 containerd[1465]: time="2025-11-08T00:22:49.886774707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 8 00:22:49.889194 containerd[1465]: time="2025-11-08T00:22:49.889152592Z" level=info msg="CreateContainer within sandbox \"1d8104fb2aee9c6619c1a7c9070b946db472b858238674983d33a9e5b7c81584\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 00:22:49.909931 containerd[1465]: time="2025-11-08T00:22:49.909823700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w2gd5,Uid:9f457daf-2e29-4296-8a97-a781b779b90e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\""
Nov 8 00:22:49.910949 kubelet[2515]: E1108 00:22:49.910900 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:49.917503 containerd[1465]: time="2025-11-08T00:22:49.917436443Z" level=info msg="CreateContainer within sandbox \"1d8104fb2aee9c6619c1a7c9070b946db472b858238674983d33a9e5b7c81584\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07f978c3da8ece2a1b2fd38b6fddfb342a8e3eff52d635728c62fffb5cabd23c\""
Nov 8 00:22:49.918080 containerd[1465]: time="2025-11-08T00:22:49.918051385Z" level=info msg="StartContainer for \"07f978c3da8ece2a1b2fd38b6fddfb342a8e3eff52d635728c62fffb5cabd23c\""
Nov 8 00:22:49.958436 systemd[1]: Started cri-containerd-07f978c3da8ece2a1b2fd38b6fddfb342a8e3eff52d635728c62fffb5cabd23c.scope - libcontainer container 07f978c3da8ece2a1b2fd38b6fddfb342a8e3eff52d635728c62fffb5cabd23c.
Nov 8 00:22:49.990657 containerd[1465]: time="2025-11-08T00:22:49.990601180Z" level=info msg="StartContainer for \"07f978c3da8ece2a1b2fd38b6fddfb342a8e3eff52d635728c62fffb5cabd23c\" returns successfully"
Nov 8 00:22:50.196984 kubelet[2515]: E1108 00:22:50.196860 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:50.208060 kubelet[2515]: I1108 00:22:50.207998 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pr2lh" podStartSLOduration=1.207979725 podStartE2EDuration="1.207979725s" podCreationTimestamp="2025-11-08 00:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:50.207794385 +0000 UTC m=+7.139212726" watchObservedRunningTime="2025-11-08 00:22:50.207979725 +0000 UTC m=+7.139398066"
Nov 8 00:22:53.630600 update_engine[1453]: I20251108 00:22:53.630518 1453 update_attempter.cc:509] Updating boot flags...
Nov 8 00:22:53.684303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2899)
Nov 8 00:22:53.728385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2897)
Nov 8 00:22:53.766350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2897)
Nov 8 00:22:55.079485 kubelet[2515]: E1108 00:22:55.079391 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:55.206212 kubelet[2515]: E1108 00:22:55.206150 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:55.909275 kubelet[2515]: E1108 00:22:55.909226 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:55.940406 kubelet[2515]: E1108 00:22:55.940365 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:56.208206 kubelet[2515]: E1108 00:22:56.208070 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:22:58.270480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733049676.mount: Deactivated successfully.
Nov 8 00:23:00.755580 containerd[1465]: time="2025-11-08T00:23:00.755522678Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:23:00.756475 containerd[1465]: time="2025-11-08T00:23:00.756402362Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Nov 8 00:23:00.757517 containerd[1465]: time="2025-11-08T00:23:00.757472359Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:23:00.759390 containerd[1465]: time="2025-11-08T00:23:00.759354649Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.87253403s"
Nov 8 00:23:00.759469 containerd[1465]: time="2025-11-08T00:23:00.759394536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 8 00:23:00.769529 containerd[1465]: time="2025-11-08T00:23:00.769489580Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 8 00:23:00.775087 containerd[1465]: time="2025-11-08T00:23:00.775043779Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 8 00:23:00.795009 containerd[1465]: time="2025-11-08T00:23:00.794940151Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\""
Nov 8 00:23:00.796452 containerd[1465]: time="2025-11-08T00:23:00.795403124Z" level=info msg="StartContainer for \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\""
Nov 8 00:23:00.827494 systemd[1]: Started cri-containerd-f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4.scope - libcontainer container f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4.
Nov 8 00:23:00.858292 containerd[1465]: time="2025-11-08T00:23:00.858219520Z" level=info msg="StartContainer for \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\" returns successfully"
Nov 8 00:23:00.873529 systemd[1]: cri-containerd-f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4.scope: Deactivated successfully.
Nov 8 00:23:01.294096 kubelet[2515]: E1108 00:23:01.294017 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:01.341656 containerd[1465]: time="2025-11-08T00:23:01.340971717Z" level=info msg="shim disconnected" id=f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4 namespace=k8s.io
Nov 8 00:23:01.341656 containerd[1465]: time="2025-11-08T00:23:01.341029840Z" level=warning msg="cleaning up after shim disconnected" id=f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4 namespace=k8s.io
Nov 8 00:23:01.341656 containerd[1465]: time="2025-11-08T00:23:01.341050040Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:01.787465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4-rootfs.mount: Deactivated successfully.
Nov 8 00:23:02.199839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514054812.mount: Deactivated successfully.
Nov 8 00:23:02.296553 kubelet[2515]: E1108 00:23:02.296466 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:02.299534 containerd[1465]: time="2025-11-08T00:23:02.299486874Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 8 00:23:02.317825 containerd[1465]: time="2025-11-08T00:23:02.317774584Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\""
Nov 8 00:23:02.318499 containerd[1465]: time="2025-11-08T00:23:02.318465156Z" level=info msg="StartContainer for \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\""
Nov 8 00:23:02.344387 systemd[1]: Started cri-containerd-d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9.scope - libcontainer container d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9.
Nov 8 00:23:02.380681 containerd[1465]: time="2025-11-08T00:23:02.380628295Z" level=info msg="StartContainer for \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\" returns successfully"
Nov 8 00:23:02.392469 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:23:02.392837 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:23:02.392925 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:23:02.400572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:23:02.400797 systemd[1]: cri-containerd-d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9.scope: Deactivated successfully.
Nov 8 00:23:02.425800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:23:02.482135 containerd[1465]: time="2025-11-08T00:23:02.482008821Z" level=info msg="shim disconnected" id=d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9 namespace=k8s.io
Nov 8 00:23:02.482135 containerd[1465]: time="2025-11-08T00:23:02.482065241Z" level=warning msg="cleaning up after shim disconnected" id=d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9 namespace=k8s.io
Nov 8 00:23:02.482135 containerd[1465]: time="2025-11-08T00:23:02.482074208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:02.498352 containerd[1465]: time="2025-11-08T00:23:02.498295583Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:23:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 8 00:23:02.608751 containerd[1465]: time="2025-11-08T00:23:02.608692822Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:23:02.609377 containerd[1465]: time="2025-11-08T00:23:02.609304411Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Nov 8 00:23:02.610468 containerd[1465]: time="2025-11-08T00:23:02.610427853Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:23:02.611836 containerd[1465]: time="2025-11-08T00:23:02.611783428Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.842255574s"
Nov 8 00:23:02.611836 containerd[1465]: time="2025-11-08T00:23:02.611824497Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 8 00:23:02.613856 containerd[1465]: time="2025-11-08T00:23:02.613807993Z" level=info msg="CreateContainer within sandbox \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 8 00:23:02.625708 containerd[1465]: time="2025-11-08T00:23:02.625665086Z" level=info msg="CreateContainer within sandbox \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\""
Nov 8 00:23:02.627351 containerd[1465]: time="2025-11-08T00:23:02.626033462Z" level=info msg="StartContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\""
Nov 8 00:23:02.655377 systemd[1]: Started cri-containerd-3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb.scope - libcontainer container 3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb.
Nov 8 00:23:02.681960 containerd[1465]: time="2025-11-08T00:23:02.681895596Z" level=info msg="StartContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" returns successfully"
Nov 8 00:23:03.310735 kubelet[2515]: E1108 00:23:03.310681 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:03.317980 kubelet[2515]: E1108 00:23:03.316825 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:23:03.324841 containerd[1465]: time="2025-11-08T00:23:03.323479930Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:23:03.402261 containerd[1465]: time="2025-11-08T00:23:03.400039979Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\""
Nov 8 00:23:03.407169 kubelet[2515]: I1108 00:23:03.407073 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w2gd5" podStartSLOduration=1.706620611 podStartE2EDuration="14.407018231s" podCreationTimestamp="2025-11-08 00:22:49 +0000 UTC" firstStartedPulling="2025-11-08 00:22:49.912196774 +0000 UTC m=+6.843615115" lastFinishedPulling="2025-11-08 00:23:02.612594394 +0000 UTC m=+19.544012735" observedRunningTime="2025-11-08 00:23:03.360403943 +0000 UTC m=+20.291822284" watchObservedRunningTime="2025-11-08 00:23:03.407018231 +0000 UTC m=+20.338436572"
Nov 8 00:23:03.409567 containerd[1465]: time="2025-11-08T00:23:03.409514735Z" level=info msg="StartContainer for \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\""
Nov 8 00:23:03.466497 systemd[1]: Started cri-containerd-2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3.scope - libcontainer container 2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3.
Nov 8 00:23:03.508651 containerd[1465]: time="2025-11-08T00:23:03.508587452Z" level=info msg="StartContainer for \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\" returns successfully"
Nov 8 00:23:03.521699 systemd[1]: cri-containerd-2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3.scope: Deactivated successfully.
Nov 8 00:23:03.559206 containerd[1465]: time="2025-11-08T00:23:03.559127338Z" level=info msg="shim disconnected" id=2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3 namespace=k8s.io
Nov 8 00:23:03.559206 containerd[1465]: time="2025-11-08T00:23:03.559203235Z" level=warning msg="cleaning up after shim disconnected" id=2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3 namespace=k8s.io
Nov 8 00:23:03.559477 containerd[1465]: time="2025-11-08T00:23:03.559215188Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:23:03.789475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3-rootfs.mount: Deactivated successfully.
Nov 8 00:23:04.320166 kubelet[2515]: E1108 00:23:04.320131 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:04.321322 kubelet[2515]: E1108 00:23:04.320192 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:04.322142 containerd[1465]: time="2025-11-08T00:23:04.322111042Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:23:04.416512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451033452.mount: Deactivated successfully. Nov 8 00:23:04.444949 containerd[1465]: time="2025-11-08T00:23:04.444879132Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\"" Nov 8 00:23:04.445612 containerd[1465]: time="2025-11-08T00:23:04.445533470Z" level=info msg="StartContainer for \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\"" Nov 8 00:23:04.478483 systemd[1]: Started cri-containerd-4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7.scope - libcontainer container 4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7. Nov 8 00:23:04.507499 systemd[1]: cri-containerd-4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7.scope: Deactivated successfully. 
Nov 8 00:23:04.510178 containerd[1465]: time="2025-11-08T00:23:04.510129338Z" level=info msg="StartContainer for \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\" returns successfully" Nov 8 00:23:04.538616 containerd[1465]: time="2025-11-08T00:23:04.538547118Z" level=info msg="shim disconnected" id=4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7 namespace=k8s.io Nov 8 00:23:04.538616 containerd[1465]: time="2025-11-08T00:23:04.538608647Z" level=warning msg="cleaning up after shim disconnected" id=4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7 namespace=k8s.io Nov 8 00:23:04.538616 containerd[1465]: time="2025-11-08T00:23:04.538619879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:23:04.789403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7-rootfs.mount: Deactivated successfully. Nov 8 00:23:05.324793 kubelet[2515]: E1108 00:23:05.324737 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:05.326765 containerd[1465]: time="2025-11-08T00:23:05.326721532Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:23:05.921313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228582620.mount: Deactivated successfully. 
Nov 8 00:23:05.986830 containerd[1465]: time="2025-11-08T00:23:05.986749340Z" level=info msg="CreateContainer within sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\"" Nov 8 00:23:05.987415 containerd[1465]: time="2025-11-08T00:23:05.987308683Z" level=info msg="StartContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\"" Nov 8 00:23:06.020370 systemd[1]: Started cri-containerd-0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f.scope - libcontainer container 0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f. Nov 8 00:23:06.094759 containerd[1465]: time="2025-11-08T00:23:06.094548890Z" level=info msg="StartContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" returns successfully" Nov 8 00:23:06.227613 kubelet[2515]: I1108 00:23:06.226585 2515 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:23:06.261630 systemd[1]: Created slice kubepods-burstable-pod126400b5_6dfe_43df_8eb7_e1472a33378c.slice - libcontainer container kubepods-burstable-pod126400b5_6dfe_43df_8eb7_e1472a33378c.slice. Nov 8 00:23:06.267770 systemd[1]: Created slice kubepods-burstable-pod31acfc58_1fce_4f03_a974_b1f281b1b236.slice - libcontainer container kubepods-burstable-pod31acfc58_1fce_4f03_a974_b1f281b1b236.slice. 
Nov 8 00:23:06.291048 kubelet[2515]: I1108 00:23:06.290970 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31acfc58-1fce-4f03-a974-b1f281b1b236-config-volume\") pod \"coredns-668d6bf9bc-tkprb\" (UID: \"31acfc58-1fce-4f03-a974-b1f281b1b236\") " pod="kube-system/coredns-668d6bf9bc-tkprb" Nov 8 00:23:06.291048 kubelet[2515]: I1108 00:23:06.291031 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wls\" (UniqueName: \"kubernetes.io/projected/31acfc58-1fce-4f03-a974-b1f281b1b236-kube-api-access-h2wls\") pod \"coredns-668d6bf9bc-tkprb\" (UID: \"31acfc58-1fce-4f03-a974-b1f281b1b236\") " pod="kube-system/coredns-668d6bf9bc-tkprb" Nov 8 00:23:06.291271 kubelet[2515]: I1108 00:23:06.291064 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznqk\" (UniqueName: \"kubernetes.io/projected/126400b5-6dfe-43df-8eb7-e1472a33378c-kube-api-access-nznqk\") pod \"coredns-668d6bf9bc-d62xz\" (UID: \"126400b5-6dfe-43df-8eb7-e1472a33378c\") " pod="kube-system/coredns-668d6bf9bc-d62xz" Nov 8 00:23:06.291271 kubelet[2515]: I1108 00:23:06.291087 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/126400b5-6dfe-43df-8eb7-e1472a33378c-config-volume\") pod \"coredns-668d6bf9bc-d62xz\" (UID: \"126400b5-6dfe-43df-8eb7-e1472a33378c\") " pod="kube-system/coredns-668d6bf9bc-d62xz" Nov 8 00:23:06.331053 kubelet[2515]: E1108 00:23:06.331018 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:06.492646 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:44170.service - OpenSSH per-connection server daemon (10.0.0.1:44170). 
Nov 8 00:23:06.543414 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 44170 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:06.545006 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:06.551301 systemd-logind[1449]: New session 8 of user core. Nov 8 00:23:06.557455 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:23:06.567341 kubelet[2515]: E1108 00:23:06.567280 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:06.569078 containerd[1465]: time="2025-11-08T00:23:06.569031552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d62xz,Uid:126400b5-6dfe-43df-8eb7-e1472a33378c,Namespace:kube-system,Attempt:0,}" Nov 8 00:23:06.572228 kubelet[2515]: E1108 00:23:06.571863 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:06.572397 containerd[1465]: time="2025-11-08T00:23:06.572360247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tkprb,Uid:31acfc58-1fce-4f03-a974-b1f281b1b236,Namespace:kube-system,Attempt:0,}" Nov 8 00:23:06.722029 sshd[3310]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:06.729225 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:44170.service: Deactivated successfully. Nov 8 00:23:06.732101 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:23:06.737730 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:23:06.738748 systemd-logind[1449]: Removed session 8. 
Nov 8 00:23:07.334915 kubelet[2515]: E1108 00:23:07.333165 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:08.332030 systemd-networkd[1405]: cilium_host: Link UP Nov 8 00:23:08.333182 systemd-networkd[1405]: cilium_net: Link UP Nov 8 00:23:08.334036 systemd-networkd[1405]: cilium_net: Gained carrier Nov 8 00:23:08.334310 systemd-networkd[1405]: cilium_host: Gained carrier Nov 8 00:23:08.338302 kubelet[2515]: E1108 00:23:08.337924 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:08.444971 systemd-networkd[1405]: cilium_vxlan: Link UP Nov 8 00:23:08.444980 systemd-networkd[1405]: cilium_vxlan: Gained carrier Nov 8 00:23:08.665267 kernel: NET: Registered PF_ALG protocol family Nov 8 00:23:08.675515 systemd-networkd[1405]: cilium_net: Gained IPv6LL Nov 8 00:23:09.051436 systemd-networkd[1405]: cilium_host: Gained IPv6LL Nov 8 00:23:09.422053 systemd-networkd[1405]: lxc_health: Link UP Nov 8 00:23:09.422436 systemd-networkd[1405]: lxc_health: Gained carrier Nov 8 00:23:09.499616 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL Nov 8 00:23:09.660287 kubelet[2515]: E1108 00:23:09.660219 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:09.679024 kubelet[2515]: I1108 00:23:09.678656 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5gzdb" podStartSLOduration=9.795634073 podStartE2EDuration="20.678632809s" podCreationTimestamp="2025-11-08 00:22:49 +0000 UTC" firstStartedPulling="2025-11-08 00:22:49.886214444 +0000 UTC m=+6.817632786" lastFinishedPulling="2025-11-08 00:23:00.76921318 +0000 UTC m=+17.700631522" 
observedRunningTime="2025-11-08 00:23:06.443763771 +0000 UTC m=+23.375182112" watchObservedRunningTime="2025-11-08 00:23:09.678632809 +0000 UTC m=+26.610051150" Nov 8 00:23:09.717539 systemd-networkd[1405]: lxc17f2f26d5a37: Link UP Nov 8 00:23:09.732103 systemd-networkd[1405]: lxc9cf52fc3b8ed: Link UP Nov 8 00:23:09.739268 kernel: eth0: renamed from tmp94121 Nov 8 00:23:09.750812 kernel: eth0: renamed from tmp747c5 Nov 8 00:23:09.764481 systemd-networkd[1405]: lxc17f2f26d5a37: Gained carrier Nov 8 00:23:09.765999 systemd-networkd[1405]: lxc9cf52fc3b8ed: Gained carrier Nov 8 00:23:10.339408 kubelet[2515]: E1108 00:23:10.339343 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:10.587485 systemd-networkd[1405]: lxc_health: Gained IPv6LL Nov 8 00:23:10.907450 systemd-networkd[1405]: lxc9cf52fc3b8ed: Gained IPv6LL Nov 8 00:23:10.971485 systemd-networkd[1405]: lxc17f2f26d5a37: Gained IPv6LL Nov 8 00:23:11.340726 kubelet[2515]: E1108 00:23:11.340582 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:11.740431 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:44172.service - OpenSSH per-connection server daemon (10.0.0.1:44172). Nov 8 00:23:11.790445 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 44172 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:11.792858 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:11.798693 systemd-logind[1449]: New session 9 of user core. Nov 8 00:23:11.804433 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 8 00:23:12.078310 sshd[3768]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:12.081746 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:44172.service: Deactivated successfully. Nov 8 00:23:12.084780 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:23:12.087158 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:23:12.088307 systemd-logind[1449]: Removed session 9. Nov 8 00:23:13.331277 containerd[1465]: time="2025-11-08T00:23:13.330114623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:23:13.331277 containerd[1465]: time="2025-11-08T00:23:13.330182964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:23:13.331277 containerd[1465]: time="2025-11-08T00:23:13.330226838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:13.331277 containerd[1465]: time="2025-11-08T00:23:13.330483751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:13.334307 containerd[1465]: time="2025-11-08T00:23:13.333683796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:23:13.334307 containerd[1465]: time="2025-11-08T00:23:13.333865725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:23:13.334307 containerd[1465]: time="2025-11-08T00:23:13.333914238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:13.334307 containerd[1465]: time="2025-11-08T00:23:13.334024971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:13.363423 systemd[1]: Started cri-containerd-747c58e1f1b133231d6b2a1ad9a90339bbbe6424388994a14bd6b541367eec84.scope - libcontainer container 747c58e1f1b133231d6b2a1ad9a90339bbbe6424388994a14bd6b541367eec84. Nov 8 00:23:13.365660 systemd[1]: Started cri-containerd-94121b9e755ed78a041f09ba9eeca7d1ef5d8831e852e018de5b00d6717f89d0.scope - libcontainer container 94121b9e755ed78a041f09ba9eeca7d1ef5d8831e852e018de5b00d6717f89d0. Nov 8 00:23:13.381090 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:23:13.385556 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:23:13.412056 containerd[1465]: time="2025-11-08T00:23:13.411881345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d62xz,Uid:126400b5-6dfe-43df-8eb7-e1472a33378c,Namespace:kube-system,Attempt:0,} returns sandbox id \"94121b9e755ed78a041f09ba9eeca7d1ef5d8831e852e018de5b00d6717f89d0\"" Nov 8 00:23:13.413593 kubelet[2515]: E1108 00:23:13.413565 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:13.418700 containerd[1465]: time="2025-11-08T00:23:13.418643839Z" level=info msg="CreateContainer within sandbox \"94121b9e755ed78a041f09ba9eeca7d1ef5d8831e852e018de5b00d6717f89d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:23:13.420573 containerd[1465]: time="2025-11-08T00:23:13.420525102Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-tkprb,Uid:31acfc58-1fce-4f03-a974-b1f281b1b236,Namespace:kube-system,Attempt:0,} returns sandbox id \"747c58e1f1b133231d6b2a1ad9a90339bbbe6424388994a14bd6b541367eec84\"" Nov 8 00:23:13.421321 kubelet[2515]: E1108 00:23:13.421203 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:13.424374 containerd[1465]: time="2025-11-08T00:23:13.423943355Z" level=info msg="CreateContainer within sandbox \"747c58e1f1b133231d6b2a1ad9a90339bbbe6424388994a14bd6b541367eec84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:23:13.442439 containerd[1465]: time="2025-11-08T00:23:13.442380156Z" level=info msg="CreateContainer within sandbox \"94121b9e755ed78a041f09ba9eeca7d1ef5d8831e852e018de5b00d6717f89d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a0db5c4a62ee2825fdb858313f11558a1fd6c4ff7958d9dab4fcb624a8b253e\"" Nov 8 00:23:13.442886 containerd[1465]: time="2025-11-08T00:23:13.442858584Z" level=info msg="StartContainer for \"9a0db5c4a62ee2825fdb858313f11558a1fd6c4ff7958d9dab4fcb624a8b253e\"" Nov 8 00:23:13.446719 containerd[1465]: time="2025-11-08T00:23:13.446661487Z" level=info msg="CreateContainer within sandbox \"747c58e1f1b133231d6b2a1ad9a90339bbbe6424388994a14bd6b541367eec84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e879ff8a1dce7efcd3ee05f3ea87880b700df06248b16b51f4a02fe1677da949\"" Nov 8 00:23:13.448152 containerd[1465]: time="2025-11-08T00:23:13.447370889Z" level=info msg="StartContainer for \"e879ff8a1dce7efcd3ee05f3ea87880b700df06248b16b51f4a02fe1677da949\"" Nov 8 00:23:13.483402 systemd[1]: Started cri-containerd-9a0db5c4a62ee2825fdb858313f11558a1fd6c4ff7958d9dab4fcb624a8b253e.scope - libcontainer container 9a0db5c4a62ee2825fdb858313f11558a1fd6c4ff7958d9dab4fcb624a8b253e. 
Nov 8 00:23:13.484865 systemd[1]: Started cri-containerd-e879ff8a1dce7efcd3ee05f3ea87880b700df06248b16b51f4a02fe1677da949.scope - libcontainer container e879ff8a1dce7efcd3ee05f3ea87880b700df06248b16b51f4a02fe1677da949. Nov 8 00:23:13.523933 containerd[1465]: time="2025-11-08T00:23:13.523884755Z" level=info msg="StartContainer for \"9a0db5c4a62ee2825fdb858313f11558a1fd6c4ff7958d9dab4fcb624a8b253e\" returns successfully" Nov 8 00:23:13.524332 containerd[1465]: time="2025-11-08T00:23:13.523884735Z" level=info msg="StartContainer for \"e879ff8a1dce7efcd3ee05f3ea87880b700df06248b16b51f4a02fe1677da949\" returns successfully" Nov 8 00:23:14.336996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080278476.mount: Deactivated successfully. Nov 8 00:23:14.363771 kubelet[2515]: E1108 00:23:14.363733 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:14.365568 kubelet[2515]: E1108 00:23:14.365529 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:14.407164 kubelet[2515]: I1108 00:23:14.407096 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tkprb" podStartSLOduration=25.407078673 podStartE2EDuration="25.407078673s" podCreationTimestamp="2025-11-08 00:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:23:14.406515632 +0000 UTC m=+31.337933973" watchObservedRunningTime="2025-11-08 00:23:14.407078673 +0000 UTC m=+31.338497014" Nov 8 00:23:14.430019 kubelet[2515]: I1108 00:23:14.429366 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d62xz" podStartSLOduration=25.429345877 
podStartE2EDuration="25.429345877s" podCreationTimestamp="2025-11-08 00:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:23:14.418873804 +0000 UTC m=+31.350292145" watchObservedRunningTime="2025-11-08 00:23:14.429345877 +0000 UTC m=+31.360764238" Nov 8 00:23:15.367371 kubelet[2515]: E1108 00:23:15.367335 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:15.367532 kubelet[2515]: E1108 00:23:15.367480 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:16.369409 kubelet[2515]: E1108 00:23:16.369346 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:16.369940 kubelet[2515]: E1108 00:23:16.369464 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:17.093943 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:48882.service - OpenSSH per-connection server daemon (10.0.0.1:48882). Nov 8 00:23:17.137425 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 48882 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:17.139274 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:17.143427 systemd-logind[1449]: New session 10 of user core. Nov 8 00:23:17.159363 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:23:17.297655 sshd[3960]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:17.301700 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:48882.service: Deactivated successfully. Nov 8 00:23:17.303774 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:23:17.304395 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:23:17.305662 systemd-logind[1449]: Removed session 10. Nov 8 00:23:22.314652 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:48892.service - OpenSSH per-connection server daemon (10.0.0.1:48892). Nov 8 00:23:22.356281 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 48892 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:22.358117 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:22.362155 systemd-logind[1449]: New session 11 of user core. Nov 8 00:23:22.372479 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:23:22.491994 sshd[3977]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:22.509479 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:48892.service: Deactivated successfully. Nov 8 00:23:22.511328 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:23:22.512830 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:23:22.521502 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:48904.service - OpenSSH per-connection server daemon (10.0.0.1:48904). Nov 8 00:23:22.522666 systemd-logind[1449]: Removed session 11. Nov 8 00:23:22.558106 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 48904 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:22.559847 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:22.564224 systemd-logind[1449]: New session 12 of user core. Nov 8 00:23:22.572388 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 8 00:23:22.742681 sshd[3992]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:22.751615 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:48904.service: Deactivated successfully. Nov 8 00:23:22.755186 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:23:22.758648 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:23:22.764615 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:48908.service - OpenSSH per-connection server daemon (10.0.0.1:48908). Nov 8 00:23:22.767923 systemd-logind[1449]: Removed session 12. Nov 8 00:23:22.797953 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 48908 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:22.802744 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:22.810339 systemd-logind[1449]: New session 13 of user core. Nov 8 00:23:22.819523 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:23:22.949553 sshd[4004]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:22.954203 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:48908.service: Deactivated successfully. Nov 8 00:23:22.956551 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:23:22.957333 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:23:22.958612 systemd-logind[1449]: Removed session 13. Nov 8 00:23:27.964564 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). Nov 8 00:23:28.004087 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.006332 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.010795 systemd-logind[1449]: New session 14 of user core. Nov 8 00:23:28.018377 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 8 00:23:28.134433 sshd[4020]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.139396 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:56066.service: Deactivated successfully. Nov 8 00:23:28.141735 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:23:28.142573 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:23:28.143476 systemd-logind[1449]: Removed session 14. Nov 8 00:23:33.148015 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068). Nov 8 00:23:33.189030 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:33.190983 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:33.195652 systemd-logind[1449]: New session 15 of user core. Nov 8 00:23:33.206428 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:23:33.312403 sshd[4035]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:33.317182 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:56068.service: Deactivated successfully. Nov 8 00:23:33.319374 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:23:33.320014 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:23:33.320946 systemd-logind[1449]: Removed session 15. Nov 8 00:23:38.323469 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:59972.service - OpenSSH per-connection server daemon (10.0.0.1:59972). Nov 8 00:23:38.360850 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 59972 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:38.362688 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:38.366664 systemd-logind[1449]: New session 16 of user core. Nov 8 00:23:38.373386 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:23:38.503879 sshd[4049]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:38.514977 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:59972.service: Deactivated successfully. Nov 8 00:23:38.516780 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:23:38.518136 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:23:38.525510 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974). Nov 8 00:23:38.526276 systemd-logind[1449]: Removed session 16. Nov 8 00:23:38.557702 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:38.559208 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:38.563076 systemd-logind[1449]: New session 17 of user core. Nov 8 00:23:38.573356 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:23:39.429436 sshd[4063]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:39.442389 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:59974.service: Deactivated successfully. Nov 8 00:23:39.444366 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:23:39.446464 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:23:39.450958 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:59986.service - OpenSSH per-connection server daemon (10.0.0.1:59986). Nov 8 00:23:39.452612 systemd-logind[1449]: Removed session 17. Nov 8 00:23:39.489178 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 59986 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:39.490716 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:39.495842 systemd-logind[1449]: New session 18 of user core. Nov 8 00:23:39.505595 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 8 00:23:40.040558 sshd[4076]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:40.054020 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:59986.service: Deactivated successfully. Nov 8 00:23:40.056527 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:23:40.058176 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:23:40.064636 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:59994.service - OpenSSH per-connection server daemon (10.0.0.1:59994). Nov 8 00:23:40.065712 systemd-logind[1449]: Removed session 18. Nov 8 00:23:40.097777 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 59994 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:40.099410 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:40.103507 systemd-logind[1449]: New session 19 of user core. Nov 8 00:23:40.117376 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:23:40.350368 sshd[4098]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:40.361144 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:59994.service: Deactivated successfully. Nov 8 00:23:40.363644 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:23:40.365519 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:23:40.373586 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Nov 8 00:23:40.374181 systemd-logind[1449]: Removed session 19. Nov 8 00:23:40.406043 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:40.407714 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:40.412130 systemd-logind[1449]: New session 20 of user core. Nov 8 00:23:40.419395 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 8 00:23:40.539046 sshd[4111]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:40.544005 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:60006.service: Deactivated successfully.
Nov 8 00:23:40.546223 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:23:40.547270 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:23:40.548596 systemd-logind[1449]: Removed session 20.
Nov 8 00:23:45.552840 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:60022.service - OpenSSH per-connection server daemon (10.0.0.1:60022).
Nov 8 00:23:45.591553 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 60022 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:23:45.593278 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:45.597715 systemd-logind[1449]: New session 21 of user core.
Nov 8 00:23:45.607418 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:23:45.736192 sshd[4128]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:45.740741 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:60022.service: Deactivated successfully.
Nov 8 00:23:45.743006 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:23:45.743871 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:23:45.744725 systemd-logind[1449]: Removed session 21.
Nov 8 00:23:50.747347 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:38850.service - OpenSSH per-connection server daemon (10.0.0.1:38850).
Nov 8 00:23:50.787008 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 38850 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:23:50.789210 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:50.793618 systemd-logind[1449]: New session 22 of user core.
Nov 8 00:23:50.804396 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:23:50.908398 sshd[4146]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:50.912843 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:38850.service: Deactivated successfully.
Nov 8 00:23:50.915082 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:23:50.915848 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:23:50.916914 systemd-logind[1449]: Removed session 22.
Nov 8 00:23:55.923540 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:38854.service - OpenSSH per-connection server daemon (10.0.0.1:38854).
Nov 8 00:23:55.961824 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 38854 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:23:55.963668 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:23:55.968014 systemd-logind[1449]: New session 23 of user core.
Nov 8 00:23:55.979376 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:23:56.102536 sshd[4161]: pam_unix(sshd:session): session closed for user core
Nov 8 00:23:56.105928 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:38854.service: Deactivated successfully.
Nov 8 00:23:56.108193 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:23:56.109973 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:23:56.110945 systemd-logind[1449]: Removed session 23.
Nov 8 00:24:01.118246 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:36146.service - OpenSSH per-connection server daemon (10.0.0.1:36146).
Nov 8 00:24:01.157379 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 36146 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:24:01.159132 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:24:01.163466 systemd-logind[1449]: New session 24 of user core.
Nov 8 00:24:01.176431 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:24:01.284412 sshd[4176]: pam_unix(sshd:session): session closed for user core
Nov 8 00:24:01.298547 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:36146.service: Deactivated successfully.
Nov 8 00:24:01.300657 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:24:01.302423 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:24:01.304156 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:36152.service - OpenSSH per-connection server daemon (10.0.0.1:36152).
Nov 8 00:24:01.305089 systemd-logind[1449]: Removed session 24.
Nov 8 00:24:01.353583 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 36152 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:24:01.355416 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:24:01.360077 systemd-logind[1449]: New session 25 of user core.
Nov 8 00:24:01.368446 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:24:02.711356 containerd[1465]: time="2025-11-08T00:24:02.711293394Z" level=info msg="StopContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" with timeout 30 (s)"
Nov 8 00:24:02.711896 containerd[1465]: time="2025-11-08T00:24:02.711662184Z" level=info msg="Stop container \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" with signal terminated"
Nov 8 00:24:02.729684 systemd[1]: cri-containerd-3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb.scope: Deactivated successfully.
Nov 8 00:24:02.754649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb-rootfs.mount: Deactivated successfully.
Nov 8 00:24:02.755273 containerd[1465]: time="2025-11-08T00:24:02.755102044Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:24:02.757591 containerd[1465]: time="2025-11-08T00:24:02.757564491Z" level=info msg="StopContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" with timeout 2 (s)"
Nov 8 00:24:02.757908 containerd[1465]: time="2025-11-08T00:24:02.757887837Z" level=info msg="Stop container \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" with signal terminated"
Nov 8 00:24:02.764914 systemd-networkd[1405]: lxc_health: Link DOWN
Nov 8 00:24:02.764927 systemd-networkd[1405]: lxc_health: Lost carrier
Nov 8 00:24:02.766778 containerd[1465]: time="2025-11-08T00:24:02.766720752Z" level=info msg="shim disconnected" id=3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb namespace=k8s.io
Nov 8 00:24:02.766778 containerd[1465]: time="2025-11-08T00:24:02.766775024Z" level=warning msg="cleaning up after shim disconnected" id=3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb namespace=k8s.io
Nov 8 00:24:02.766966 containerd[1465]: time="2025-11-08T00:24:02.766786084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:02.786713 containerd[1465]: time="2025-11-08T00:24:02.786650017Z" level=info msg="StopContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" returns successfully"
Nov 8 00:24:02.790194 containerd[1465]: time="2025-11-08T00:24:02.790158381Z" level=info msg="StopPodSandbox for \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\""
Nov 8 00:24:02.790286 containerd[1465]: time="2025-11-08T00:24:02.790193747Z" level=info msg="Container to stop \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.792335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0-shm.mount: Deactivated successfully.
Nov 8 00:24:02.795089 systemd[1]: cri-containerd-0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f.scope: Deactivated successfully.
Nov 8 00:24:02.795518 systemd[1]: cri-containerd-0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f.scope: Consumed 7.255s CPU time.
Nov 8 00:24:02.800540 systemd[1]: cri-containerd-a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0.scope: Deactivated successfully.
Nov 8 00:24:02.820591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f-rootfs.mount: Deactivated successfully.
Nov 8 00:24:02.825934 containerd[1465]: time="2025-11-08T00:24:02.825844796Z" level=info msg="shim disconnected" id=0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f namespace=k8s.io
Nov 8 00:24:02.825934 containerd[1465]: time="2025-11-08T00:24:02.825925217Z" level=warning msg="cleaning up after shim disconnected" id=0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f namespace=k8s.io
Nov 8 00:24:02.825934 containerd[1465]: time="2025-11-08T00:24:02.825934915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:02.833057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0-rootfs.mount: Deactivated successfully.
Nov 8 00:24:02.837424 containerd[1465]: time="2025-11-08T00:24:02.837361652Z" level=info msg="shim disconnected" id=a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0 namespace=k8s.io
Nov 8 00:24:02.837424 containerd[1465]: time="2025-11-08T00:24:02.837417347Z" level=warning msg="cleaning up after shim disconnected" id=a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0 namespace=k8s.io
Nov 8 00:24:02.837559 containerd[1465]: time="2025-11-08T00:24:02.837429560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:02.845686 containerd[1465]: time="2025-11-08T00:24:02.845640521Z" level=info msg="StopContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" returns successfully"
Nov 8 00:24:02.846130 containerd[1465]: time="2025-11-08T00:24:02.846111843Z" level=info msg="StopPodSandbox for \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\""
Nov 8 00:24:02.846174 containerd[1465]: time="2025-11-08T00:24:02.846144354Z" level=info msg="Container to stop \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.846174 containerd[1465]: time="2025-11-08T00:24:02.846156176Z" level=info msg="Container to stop \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.846174 containerd[1465]: time="2025-11-08T00:24:02.846167778Z" level=info msg="Container to stop \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.846258 containerd[1465]: time="2025-11-08T00:24:02.846179009Z" level=info msg="Container to stop \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.846258 containerd[1465]: time="2025-11-08T00:24:02.846189348Z" level=info msg="Container to stop \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 8 00:24:02.853495 systemd[1]: cri-containerd-12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f.scope: Deactivated successfully.
Nov 8 00:24:02.861628 containerd[1465]: time="2025-11-08T00:24:02.861584741Z" level=info msg="TearDown network for sandbox \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\" successfully"
Nov 8 00:24:02.861628 containerd[1465]: time="2025-11-08T00:24:02.861620799Z" level=info msg="StopPodSandbox for \"a67ce73ba45e6407e35ea06e8c06da42f4ae5c551ec876aea12b3eafe87d95a0\" returns successfully"
Nov 8 00:24:02.879032 containerd[1465]: time="2025-11-08T00:24:02.878959948Z" level=info msg="shim disconnected" id=12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f namespace=k8s.io
Nov 8 00:24:02.879032 containerd[1465]: time="2025-11-08T00:24:02.879022074Z" level=warning msg="cleaning up after shim disconnected" id=12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f namespace=k8s.io
Nov 8 00:24:02.879032 containerd[1465]: time="2025-11-08T00:24:02.879032854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:02.894335 containerd[1465]: time="2025-11-08T00:24:02.894278958Z" level=info msg="TearDown network for sandbox \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" successfully"
Nov 8 00:24:02.894335 containerd[1465]: time="2025-11-08T00:24:02.894325125Z" level=info msg="StopPodSandbox for \"12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f\" returns successfully"
Nov 8 00:24:02.944063 kubelet[2515]: I1108 00:24:02.944008 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-xtables-lock\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944063 kubelet[2515]: I1108 00:24:02.944050 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-config-path\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944063 kubelet[2515]: I1108 00:24:02.944066 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-lib-modules\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944082 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-hubble-tls\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944096 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-hostproc\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944113 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5226699a-f3f6-4b74-91e8-37c9e46225d1-clustermesh-secrets\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944126 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-bpf-maps\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944138 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-run\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944756 kubelet[2515]: I1108 00:24:02.944159 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgvk\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-kube-api-access-ccgvk\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944173 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-etc-cni-netd\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944188 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnb25\" (UniqueName: \"kubernetes.io/projected/9f457daf-2e29-4296-8a97-a781b779b90e-kube-api-access-mnb25\") pod \"9f457daf-2e29-4296-8a97-a781b779b90e\" (UID: \"9f457daf-2e29-4296-8a97-a781b779b90e\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944206 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f457daf-2e29-4296-8a97-a781b779b90e-cilium-config-path\") pod \"9f457daf-2e29-4296-8a97-a781b779b90e\" (UID: \"9f457daf-2e29-4296-8a97-a781b779b90e\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944221 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-cgroup\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944254 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cni-path\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.944963 kubelet[2515]: I1108 00:24:02.944268 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-net\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.945190 kubelet[2515]: I1108 00:24:02.944285 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-kernel\") pod \"5226699a-f3f6-4b74-91e8-37c9e46225d1\" (UID: \"5226699a-f3f6-4b74-91e8-37c9e46225d1\") "
Nov 8 00:24:02.945190 kubelet[2515]: I1108 00:24:02.944339 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945190 kubelet[2515]: I1108 00:24:02.944342 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945190 kubelet[2515]: I1108 00:24:02.944400 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945190 kubelet[2515]: I1108 00:24:02.944601 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945660 kubelet[2515]: I1108 00:24:02.945603 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945660 kubelet[2515]: I1108 00:24:02.945636 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.945660 kubelet[2515]: I1108 00:24:02.945665 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.946031 kubelet[2515]: I1108 00:24:02.945975 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.946384 kubelet[2515]: I1108 00:24:02.946274 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.949138 kubelet[2515]: I1108 00:24:02.948347 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 8 00:24:02.949138 kubelet[2515]: I1108 00:24:02.948999 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f457daf-2e29-4296-8a97-a781b779b90e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f457daf-2e29-4296-8a97-a781b779b90e" (UID: "9f457daf-2e29-4296-8a97-a781b779b90e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 8 00:24:02.949138 kubelet[2515]: I1108 00:24:02.949104 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f457daf-2e29-4296-8a97-a781b779b90e-kube-api-access-mnb25" (OuterVolumeSpecName: "kube-api-access-mnb25") pod "9f457daf-2e29-4296-8a97-a781b779b90e" (UID: "9f457daf-2e29-4296-8a97-a781b779b90e"). InnerVolumeSpecName "kube-api-access-mnb25". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 8 00:24:02.949374 kubelet[2515]: I1108 00:24:02.949323 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 8 00:24:02.949998 kubelet[2515]: I1108 00:24:02.949968 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-kube-api-access-ccgvk" (OuterVolumeSpecName: "kube-api-access-ccgvk") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "kube-api-access-ccgvk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 8 00:24:02.951192 kubelet[2515]: I1108 00:24:02.951168 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 8 00:24:02.951581 kubelet[2515]: I1108 00:24:02.951559 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5226699a-f3f6-4b74-91e8-37c9e46225d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5226699a-f3f6-4b74-91e8-37c9e46225d1" (UID: "5226699a-f3f6-4b74-91e8-37c9e46225d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045003 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045045 2515 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045054 2515 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045065 2515 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045073 2515 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045080 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045090 2515 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045168 kubelet[2515]: I1108 00:24:03.045097 2515 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045105 2515 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045113 2515 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5226699a-f3f6-4b74-91e8-37c9e46225d1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045121 2515 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045128 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045136 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccgvk\" (UniqueName: \"kubernetes.io/projected/5226699a-f3f6-4b74-91e8-37c9e46225d1-kube-api-access-ccgvk\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045145 2515 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5226699a-f3f6-4b74-91e8-37c9e46225d1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045154 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mnb25\" (UniqueName: \"kubernetes.io/projected/9f457daf-2e29-4296-8a97-a781b779b90e-kube-api-access-mnb25\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.045627 kubelet[2515]: I1108 00:24:03.045165 2515 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f457daf-2e29-4296-8a97-a781b779b90e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 8 00:24:03.180189 systemd[1]: Removed slice kubepods-besteffort-pod9f457daf_2e29_4296_8a97_a781b779b90e.slice - libcontainer container kubepods-besteffort-pod9f457daf_2e29_4296_8a97_a781b779b90e.slice.
Nov 8 00:24:03.181907 systemd[1]: Removed slice kubepods-burstable-pod5226699a_f3f6_4b74_91e8_37c9e46225d1.slice - libcontainer container kubepods-burstable-pod5226699a_f3f6_4b74_91e8_37c9e46225d1.slice.
Nov 8 00:24:03.181999 systemd[1]: kubepods-burstable-pod5226699a_f3f6_4b74_91e8_37c9e46225d1.slice: Consumed 7.367s CPU time.
Nov 8 00:24:03.292968 kubelet[2515]: E1108 00:24:03.292875 2515 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 8 00:24:03.459745 kubelet[2515]: I1108 00:24:03.459686 2515 scope.go:117] "RemoveContainer" containerID="3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb"
Nov 8 00:24:03.461818 containerd[1465]: time="2025-11-08T00:24:03.461712304Z" level=info msg="RemoveContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\""
Nov 8 00:24:03.467212 containerd[1465]: time="2025-11-08T00:24:03.466261208Z" level=info msg="RemoveContainer for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" returns successfully"
Nov 8 00:24:03.467488 kubelet[2515]: I1108 00:24:03.467463 2515 scope.go:117] "RemoveContainer" containerID="3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb"
Nov 8 00:24:03.472411 containerd[1465]: time="2025-11-08T00:24:03.472364551Z" level=error msg="ContainerStatus for \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\": not found"
Nov 8 00:24:03.482738 kubelet[2515]: E1108 00:24:03.482420 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\": not found" containerID="3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb"
Nov 8 00:24:03.482738 kubelet[2515]: I1108 00:24:03.482459 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb"} err="failed to get container status \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e888fcc911c47934be7b6e43d332c5a9a369bd2b99eaae27d4379035d2bfedb\": not found"
Nov 8 00:24:03.482738 kubelet[2515]: I1108 00:24:03.482536 2515 scope.go:117] "RemoveContainer" containerID="0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f"
Nov 8 00:24:03.483632 containerd[1465]: time="2025-11-08T00:24:03.483559964Z" level=info msg="RemoveContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\""
Nov 8 00:24:03.491923 containerd[1465]: time="2025-11-08T00:24:03.491883361Z" level=info msg="RemoveContainer for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" returns successfully"
Nov 8 00:24:03.492172 kubelet[2515]: I1108 00:24:03.492134 2515 scope.go:117] "RemoveContainer" containerID="4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7"
Nov 8 00:24:03.493880 containerd[1465]: time="2025-11-08T00:24:03.493615944Z" level=info msg="RemoveContainer for \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\""
Nov 8 00:24:03.499514 containerd[1465]: time="2025-11-08T00:24:03.499449993Z" level=info msg="RemoveContainer for \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\" returns successfully"
Nov 8 00:24:03.500475 kubelet[2515]: I1108 00:24:03.500447 2515 scope.go:117] "RemoveContainer" containerID="2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3"
Nov 8 00:24:03.504091 containerd[1465]: time="2025-11-08T00:24:03.503968069Z" level=info msg="RemoveContainer for \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\""
Nov 8 00:24:03.508412 containerd[1465]: time="2025-11-08T00:24:03.508329552Z" level=info msg="RemoveContainer for \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\" returns successfully"
Nov 8 00:24:03.508577 kubelet[2515]: I1108 00:24:03.508555 2515 scope.go:117] "RemoveContainer" containerID="d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9"
Nov 8 00:24:03.516703 containerd[1465]: time="2025-11-08T00:24:03.516642289Z" level=info msg="RemoveContainer for \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\""
Nov 8 00:24:03.520222 containerd[1465]: time="2025-11-08T00:24:03.520193566Z" level=info msg="RemoveContainer for \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\" returns successfully"
Nov 8 00:24:03.520424 kubelet[2515]: I1108 00:24:03.520398 2515 scope.go:117] "RemoveContainer" containerID="f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4"
Nov 8 00:24:03.521609 containerd[1465]: time="2025-11-08T00:24:03.521581905Z" level=info msg="RemoveContainer for \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\""
Nov 8 00:24:03.525083 containerd[1465]: time="2025-11-08T00:24:03.525032332Z" level=info msg="RemoveContainer for \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\" returns successfully"
Nov 8 00:24:03.525224 kubelet[2515]: I1108 00:24:03.525176 2515 scope.go:117] "RemoveContainer" containerID="0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f"
Nov 8 00:24:03.525387 containerd[1465]: time="2025-11-08T00:24:03.525356819Z" level=error msg="ContainerStatus for \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\": not found"
Nov 8 00:24:03.525509 kubelet[2515]: E1108 00:24:03.525481 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\": not found" containerID="0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f"
Nov 8 00:24:03.525535 kubelet[2515]: I1108 00:24:03.525514 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f"} err="failed to get container status \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d2755087a86044bfe5a29e8a2b032d0b617fe24c5dc39eb4858c5b770be852f\": not found"
Nov 8 00:24:03.525563 kubelet[2515]: I1108 00:24:03.525539 2515 scope.go:117] "RemoveContainer" containerID="4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7"
Nov 8 00:24:03.525711 containerd[1465]: time="2025-11-08T00:24:03.525663854Z" level=error msg="ContainerStatus for \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\": not found"
Nov 8 00:24:03.525812 kubelet[2515]: E1108
00:24:03.525784 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\": not found" containerID="4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7" Nov 8 00:24:03.525849 kubelet[2515]: I1108 00:24:03.525813 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7"} err="failed to get container status \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a3377c618f2e414f7ab776c1291bba48deed7f42eb4dd078d07cc8391e339c7\": not found" Nov 8 00:24:03.525849 kubelet[2515]: I1108 00:24:03.525836 2515 scope.go:117] "RemoveContainer" containerID="2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3" Nov 8 00:24:03.526078 containerd[1465]: time="2025-11-08T00:24:03.526030570Z" level=error msg="ContainerStatus for \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\": not found" Nov 8 00:24:03.526159 kubelet[2515]: E1108 00:24:03.526135 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\": not found" containerID="2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3" Nov 8 00:24:03.526186 kubelet[2515]: I1108 00:24:03.526156 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3"} err="failed to get container status 
\"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cf2e6d6e729719921d404e7d111bfe9fc50bba02838361adfd9b3826d503ba3\": not found" Nov 8 00:24:03.526186 kubelet[2515]: I1108 00:24:03.526169 2515 scope.go:117] "RemoveContainer" containerID="d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9" Nov 8 00:24:03.526349 containerd[1465]: time="2025-11-08T00:24:03.526317748Z" level=error msg="ContainerStatus for \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\": not found" Nov 8 00:24:03.526434 kubelet[2515]: E1108 00:24:03.526410 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\": not found" containerID="d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9" Nov 8 00:24:03.526467 kubelet[2515]: I1108 00:24:03.526433 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9"} err="failed to get container status \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5cb50e656583ea7296ef4b9d73fed12e045135ff71e94e33345b188f6cbc3c9\": not found" Nov 8 00:24:03.526467 kubelet[2515]: I1108 00:24:03.526447 2515 scope.go:117] "RemoveContainer" containerID="f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4" Nov 8 00:24:03.526596 containerd[1465]: time="2025-11-08T00:24:03.526566293Z" level=error msg="ContainerStatus for \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\": not found" Nov 8 00:24:03.526677 kubelet[2515]: E1108 00:24:03.526655 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\": not found" containerID="f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4" Nov 8 00:24:03.526711 kubelet[2515]: I1108 00:24:03.526676 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4"} err="failed to get container status \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f32830b5736bfa6322b76d0ba8c22c738519db5ad474876709d5179eed49d9a4\": not found" Nov 8 00:24:03.728653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f-rootfs.mount: Deactivated successfully. Nov 8 00:24:03.728794 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12ac87331c80a26af99ce2c0f20e73d8fc5758a49f3321a1faf9348bf01c5b3f-shm.mount: Deactivated successfully. Nov 8 00:24:03.728898 systemd[1]: var-lib-kubelet-pods-9f457daf\x2d2e29\x2d4296\x2d8a97\x2da781b779b90e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmnb25.mount: Deactivated successfully. Nov 8 00:24:03.729010 systemd[1]: var-lib-kubelet-pods-5226699a\x2df3f6\x2d4b74\x2d91e8\x2d37c9e46225d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 8 00:24:03.729120 systemd[1]: var-lib-kubelet-pods-5226699a\x2df3f6\x2d4b74\x2d91e8\x2d37c9e46225d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccgvk.mount: Deactivated successfully.
Nov 8 00:24:03.729260 systemd[1]: var-lib-kubelet-pods-5226699a\x2df3f6\x2d4b74\x2d91e8\x2d37c9e46225d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 8 00:24:04.676945 sshd[4190]: pam_unix(sshd:session): session closed for user core
Nov 8 00:24:04.690465 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:36152.service: Deactivated successfully.
Nov 8 00:24:04.692932 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:24:04.694718 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:24:04.696989 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:36156.service - OpenSSH per-connection server daemon (10.0.0.1:36156).
Nov 8 00:24:04.698191 systemd-logind[1449]: Removed session 25.
Nov 8 00:24:04.749220 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 36156 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:24:04.751068 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:24:04.755637 systemd-logind[1449]: New session 26 of user core.
Nov 8 00:24:04.764389 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:24:05.175355 kubelet[2515]: I1108 00:24:05.175311 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5226699a-f3f6-4b74-91e8-37c9e46225d1" path="/var/lib/kubelet/pods/5226699a-f3f6-4b74-91e8-37c9e46225d1/volumes"
Nov 8 00:24:05.176273 kubelet[2515]: I1108 00:24:05.176210 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f457daf-2e29-4296-8a97-a781b779b90e" path="/var/lib/kubelet/pods/9f457daf-2e29-4296-8a97-a781b779b90e/volumes"
Nov 8 00:24:05.233420 sshd[4353]: pam_unix(sshd:session): session closed for user core
Nov 8 00:24:05.243394 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:36156.service: Deactivated successfully.
Nov 8 00:24:05.245449 kubelet[2515]: I1108 00:24:05.245408 2515 memory_manager.go:355] "RemoveStaleState removing state" podUID="5226699a-f3f6-4b74-91e8-37c9e46225d1" containerName="cilium-agent"
Nov 8 00:24:05.245517 kubelet[2515]: I1108 00:24:05.245505 2515 memory_manager.go:355] "RemoveStaleState removing state" podUID="9f457daf-2e29-4296-8a97-a781b779b90e" containerName="cilium-operator"
Nov 8 00:24:05.246915 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:24:05.250302 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:24:05.262742 systemd[1]: Started sshd@26-10.0.0.74:22-10.0.0.1:36170.service - OpenSSH per-connection server daemon (10.0.0.1:36170).
Nov 8 00:24:05.265821 systemd-logind[1449]: Removed session 26.
Nov 8 00:24:05.274034 systemd[1]: Created slice kubepods-burstable-pod09ea7d61_e299_4c72_8796_91e9b89d2959.slice - libcontainer container kubepods-burstable-pod09ea7d61_e299_4c72_8796_91e9b89d2959.slice.
Nov 8 00:24:05.299648 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 36170 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:24:05.301466 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:24:05.305392 systemd-logind[1449]: New session 27 of user core.
Nov 8 00:24:05.314371 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 00:24:05.357362 kubelet[2515]: I1108 00:24:05.357330 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-bpf-maps\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357436 kubelet[2515]: I1108 00:24:05.357369 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09ea7d61-e299-4c72-8796-91e9b89d2959-clustermesh-secrets\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357461 kubelet[2515]: I1108 00:24:05.357452 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-etc-cni-netd\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357601 kubelet[2515]: I1108 00:24:05.357556 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-hostproc\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357639 kubelet[2515]: I1108 00:24:05.357607 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-cilium-cgroup\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357675 kubelet[2515]: I1108 00:24:05.357640 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-host-proc-sys-net\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357675 kubelet[2515]: I1108 00:24:05.357663 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-cilium-run\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357727 kubelet[2515]: I1108 00:24:05.357696 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-xtables-lock\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357727 kubelet[2515]: I1108 00:24:05.357720 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09ea7d61-e299-4c72-8796-91e9b89d2959-cilium-config-path\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357770 kubelet[2515]: I1108 00:24:05.357740 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-host-proc-sys-kernel\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357822 kubelet[2515]: I1108 00:24:05.357769 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-cni-path\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357847 kubelet[2515]: I1108 00:24:05.357827 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09ea7d61-e299-4c72-8796-91e9b89d2959-cilium-ipsec-secrets\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357875 kubelet[2515]: I1108 00:24:05.357849 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09ea7d61-e299-4c72-8796-91e9b89d2959-lib-modules\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357902 kubelet[2515]: I1108 00:24:05.357868 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09ea7d61-e299-4c72-8796-91e9b89d2959-hubble-tls\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.357929 kubelet[2515]: I1108 00:24:05.357904 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdspr\" (UniqueName: \"kubernetes.io/projected/09ea7d61-e299-4c72-8796-91e9b89d2959-kube-api-access-pdspr\") pod \"cilium-45fqn\" (UID: \"09ea7d61-e299-4c72-8796-91e9b89d2959\") " pod="kube-system/cilium-45fqn"
Nov 8 00:24:05.364623 sshd[4366]: pam_unix(sshd:session): session closed for user core
Nov 8 00:24:05.376544 systemd[1]: sshd@26-10.0.0.74:22-10.0.0.1:36170.service: Deactivated successfully.
Nov 8 00:24:05.378654 systemd[1]: session-27.scope: Deactivated successfully.
Nov 8 00:24:05.380420 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit.
Nov 8 00:24:05.381828 systemd[1]: Started sshd@27-10.0.0.74:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180).
Nov 8 00:24:05.382803 systemd-logind[1449]: Removed session 27.
Nov 8 00:24:05.419007 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:24:05.420634 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:24:05.424834 systemd-logind[1449]: New session 28 of user core.
Nov 8 00:24:05.434387 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 8 00:24:05.476695 kubelet[2515]: I1108 00:24:05.476622 2515 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:24:05Z","lastTransitionTime":"2025-11-08T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 8 00:24:05.577179 kubelet[2515]: E1108 00:24:05.577121 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:05.577841 containerd[1465]: time="2025-11-08T00:24:05.577786920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-45fqn,Uid:09ea7d61-e299-4c72-8796-91e9b89d2959,Namespace:kube-system,Attempt:0,}"
Nov 8 00:24:05.615355 containerd[1465]: time="2025-11-08T00:24:05.615049188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:24:05.615355 containerd[1465]: time="2025-11-08T00:24:05.615135078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:24:05.615355 containerd[1465]: time="2025-11-08T00:24:05.615149004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:24:05.615355 containerd[1465]: time="2025-11-08T00:24:05.615313743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:24:05.633394 systemd[1]: Started cri-containerd-262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a.scope - libcontainer container 262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a.
Nov 8 00:24:05.658338 containerd[1465]: time="2025-11-08T00:24:05.658281066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-45fqn,Uid:09ea7d61-e299-4c72-8796-91e9b89d2959,Namespace:kube-system,Attempt:0,} returns sandbox id \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\""
Nov 8 00:24:05.659074 kubelet[2515]: E1108 00:24:05.659047 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:05.661416 containerd[1465]: time="2025-11-08T00:24:05.661370713Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 8 00:24:05.673842 containerd[1465]: time="2025-11-08T00:24:05.673794481Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951\""
Nov 8 00:24:05.674383 containerd[1465]: time="2025-11-08T00:24:05.674349510Z" level=info msg="StartContainer for \"ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951\""
Nov 8 00:24:05.704391 systemd[1]: Started cri-containerd-ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951.scope - libcontainer container ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951.
Nov 8 00:24:05.734896 containerd[1465]: time="2025-11-08T00:24:05.734832537Z" level=info msg="StartContainer for \"ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951\" returns successfully"
Nov 8 00:24:05.744396 systemd[1]: cri-containerd-ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951.scope: Deactivated successfully.
Nov 8 00:24:05.776656 containerd[1465]: time="2025-11-08T00:24:05.776598790Z" level=info msg="shim disconnected" id=ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951 namespace=k8s.io
Nov 8 00:24:05.776656 containerd[1465]: time="2025-11-08T00:24:05.776651609Z" level=warning msg="cleaning up after shim disconnected" id=ed9c801fe3d69e75a8becd276aac59f24a463cc4d457bc868f12476662f5d951 namespace=k8s.io
Nov 8 00:24:05.776656 containerd[1465]: time="2025-11-08T00:24:05.776660275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:06.474651 kubelet[2515]: E1108 00:24:06.474621 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:06.476828 containerd[1465]: time="2025-11-08T00:24:06.476767808Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 8 00:24:06.499349 containerd[1465]: time="2025-11-08T00:24:06.499217438Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1\""
Nov 8 00:24:06.500257 containerd[1465]: time="2025-11-08T00:24:06.499829884Z" level=info msg="StartContainer for \"8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1\""
Nov 8 00:24:06.531361 systemd[1]: Started cri-containerd-8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1.scope - libcontainer container 8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1.
Nov 8 00:24:06.559815 containerd[1465]: time="2025-11-08T00:24:06.559774641Z" level=info msg="StartContainer for \"8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1\" returns successfully"
Nov 8 00:24:06.566343 systemd[1]: cri-containerd-8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1.scope: Deactivated successfully.
Nov 8 00:24:06.591341 containerd[1465]: time="2025-11-08T00:24:06.591273815Z" level=info msg="shim disconnected" id=8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1 namespace=k8s.io
Nov 8 00:24:06.591341 containerd[1465]: time="2025-11-08T00:24:06.591326584Z" level=warning msg="cleaning up after shim disconnected" id=8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1 namespace=k8s.io
Nov 8 00:24:06.591341 containerd[1465]: time="2025-11-08T00:24:06.591337284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:07.172494 kubelet[2515]: E1108 00:24:07.172442 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:07.463609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8847d8a622c967baab092cb495dfee477aae35adbaac1f78c7b3f525da68f6a1-rootfs.mount: Deactivated successfully.
Nov 8 00:24:07.479260 kubelet[2515]: E1108 00:24:07.479207 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:07.481415 containerd[1465]: time="2025-11-08T00:24:07.481382416Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:24:07.517337 containerd[1465]: time="2025-11-08T00:24:07.517286203Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291\""
Nov 8 00:24:07.518542 containerd[1465]: time="2025-11-08T00:24:07.517895735Z" level=info msg="StartContainer for \"6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291\""
Nov 8 00:24:07.562602 systemd[1]: Started cri-containerd-6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291.scope - libcontainer container 6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291.
Nov 8 00:24:07.596871 containerd[1465]: time="2025-11-08T00:24:07.596811733Z" level=info msg="StartContainer for \"6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291\" returns successfully"
Nov 8 00:24:07.598827 systemd[1]: cri-containerd-6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291.scope: Deactivated successfully.
Nov 8 00:24:07.626505 containerd[1465]: time="2025-11-08T00:24:07.626424189Z" level=info msg="shim disconnected" id=6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291 namespace=k8s.io
Nov 8 00:24:07.626505 containerd[1465]: time="2025-11-08T00:24:07.626487037Z" level=warning msg="cleaning up after shim disconnected" id=6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291 namespace=k8s.io
Nov 8 00:24:07.626505 containerd[1465]: time="2025-11-08T00:24:07.626495883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:08.294319 kubelet[2515]: E1108 00:24:08.294261 2515 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 8 00:24:08.464301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b3c190aa2591c6400c1bc6f0d241c6ee1dc42bdabf17f193982cb1b7504d291-rootfs.mount: Deactivated successfully.
Nov 8 00:24:08.483219 kubelet[2515]: E1108 00:24:08.483180 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:08.484962 containerd[1465]: time="2025-11-08T00:24:08.484915778Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 8 00:24:08.503172 containerd[1465]: time="2025-11-08T00:24:08.503118823Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b\""
Nov 8 00:24:08.503699 containerd[1465]: time="2025-11-08T00:24:08.503646462Z" level=info msg="StartContainer for \"34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b\""
Nov 8 00:24:08.543455 systemd[1]: Started cri-containerd-34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b.scope - libcontainer container 34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b.
Nov 8 00:24:08.571125 systemd[1]: cri-containerd-34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b.scope: Deactivated successfully.
Nov 8 00:24:08.573472 containerd[1465]: time="2025-11-08T00:24:08.573429444Z" level=info msg="StartContainer for \"34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b\" returns successfully"
Nov 8 00:24:08.598998 containerd[1465]: time="2025-11-08T00:24:08.598923147Z" level=info msg="shim disconnected" id=34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b namespace=k8s.io
Nov 8 00:24:08.598998 containerd[1465]: time="2025-11-08T00:24:08.598976077Z" level=warning msg="cleaning up after shim disconnected" id=34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b namespace=k8s.io
Nov 8 00:24:08.598998 containerd[1465]: time="2025-11-08T00:24:08.598984623Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:24:09.463794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34cba700d4211bf6d95970843ff0597ff33e98eb1c1b823772ad10410605467b-rootfs.mount: Deactivated successfully.
Nov 8 00:24:09.486591 kubelet[2515]: E1108 00:24:09.486567 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.488219 containerd[1465]: time="2025-11-08T00:24:09.488112371Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:24:09.504564 containerd[1465]: time="2025-11-08T00:24:09.504512956Z" level=info msg="CreateContainer within sandbox \"262ccfe6e8f0f391c4965d401e01c2454eb8420ad9046eeb353c7d085441152a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3e6a0a76ef28eb1568160863e8e885dd40ed090f091dd6e1018cc02a7d99f66\"" Nov 8 00:24:09.505021 containerd[1465]: time="2025-11-08T00:24:09.504990742Z" level=info msg="StartContainer for \"b3e6a0a76ef28eb1568160863e8e885dd40ed090f091dd6e1018cc02a7d99f66\"" Nov 8 00:24:09.540404 systemd[1]: Started cri-containerd-b3e6a0a76ef28eb1568160863e8e885dd40ed090f091dd6e1018cc02a7d99f66.scope - libcontainer container b3e6a0a76ef28eb1568160863e8e885dd40ed090f091dd6e1018cc02a7d99f66. 
Nov 8 00:24:09.572965 containerd[1465]: time="2025-11-08T00:24:09.572918965Z" level=info msg="StartContainer for \"b3e6a0a76ef28eb1568160863e8e885dd40ed090f091dd6e1018cc02a7d99f66\" returns successfully"
Nov 8 00:24:10.030279 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 8 00:24:10.491074 kubelet[2515]: E1108 00:24:10.491041 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:11.172608 kubelet[2515]: E1108 00:24:11.172564 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:11.578322 kubelet[2515]: E1108 00:24:11.578033 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:13.207053 systemd-networkd[1405]: lxc_health: Link UP
Nov 8 00:24:13.216905 systemd-networkd[1405]: lxc_health: Gained carrier
Nov 8 00:24:13.581190 kubelet[2515]: E1108 00:24:13.579543 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:13.602271 kubelet[2515]: I1108 00:24:13.602174 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-45fqn" podStartSLOduration=8.602156903000001 podStartE2EDuration="8.602156903s" podCreationTimestamp="2025-11-08 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:10.509183085 +0000 UTC m=+87.440601426" watchObservedRunningTime="2025-11-08 00:24:13.602156903 +0000 UTC m=+90.533575244"
Nov 8 00:24:14.498966 kubelet[2515]: E1108 00:24:14.498921 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:14.718376 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Nov 8 00:24:15.172381 kubelet[2515]: E1108 00:24:15.172331 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:15.501181 kubelet[2515]: E1108 00:24:15.501033 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:19.178289 kubelet[2515]: E1108 00:24:19.178249 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:24:22.395314 sshd[4374]: pam_unix(sshd:session): session closed for user core
Nov 8 00:24:22.399978 systemd[1]: sshd@27-10.0.0.74:22-10.0.0.1:36180.service: Deactivated successfully.
Nov 8 00:24:22.402348 systemd[1]: session-28.scope: Deactivated successfully.
Nov 8 00:24:22.403188 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit.
Nov 8 00:24:22.404402 systemd-logind[1449]: Removed session 28.
Nov 8 00:24:24.172931 kubelet[2515]: E1108 00:24:24.172870 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"