Nov 6 23:56:34.188339 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:10:46 -00 2025 Nov 6 23:56:34.188363 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e Nov 6 23:56:34.188375 kernel: BIOS-provided physical RAM map: Nov 6 23:56:34.188382 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 23:56:34.188389 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 23:56:34.188395 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 23:56:34.188404 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 6 23:56:34.188410 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 6 23:56:34.188417 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 23:56:34.188426 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 23:56:34.188433 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 23:56:34.188440 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 23:56:34.188446 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 23:56:34.188453 kernel: NX (Execute Disable) protection: active Nov 6 23:56:34.188463 kernel: APIC: Static calls initialized Nov 6 23:56:34.188471 kernel: SMBIOS 2.8 present. 
Nov 6 23:56:34.188478 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 6 23:56:34.188486 kernel: DMI: Memory slots populated: 1/1 Nov 6 23:56:34.188493 kernel: Hypervisor detected: KVM Nov 6 23:56:34.188500 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 23:56:34.188508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 23:56:34.188515 kernel: kvm-clock: using sched offset of 3281040902 cycles Nov 6 23:56:34.188523 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 23:56:34.188531 kernel: tsc: Detected 2794.750 MHz processor Nov 6 23:56:34.188541 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:56:34.188549 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:56:34.188557 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 23:56:34.188565 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 23:56:34.188573 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:56:34.188580 kernel: Using GB pages for direct mapping Nov 6 23:56:34.188588 kernel: ACPI: Early table checksum verification disabled Nov 6 23:56:34.188598 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 6 23:56:34.188606 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188614 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188621 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188629 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 6 23:56:34.188637 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188645 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188654 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188662 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:56:34.188674 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 6 23:56:34.188682 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 6 23:56:34.188690 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 6 23:56:34.188699 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 6 23:56:34.188707 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 6 23:56:34.188715 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 6 23:56:34.188723 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 6 23:56:34.188731 kernel: No NUMA configuration found Nov 6 23:56:34.188739 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 6 23:56:34.188747 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Nov 6 23:56:34.188757 kernel: Zone ranges: Nov 6 23:56:34.188765 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:56:34.188773 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 6 23:56:34.188781 kernel: Normal empty Nov 6 23:56:34.188789 kernel: Device empty Nov 6 23:56:34.188797 kernel: Movable zone start for each node Nov 6 23:56:34.188805 kernel: Early memory node ranges Nov 6 23:56:34.188815 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 23:56:34.188822 kernel: 
node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 6 23:56:34.188830 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 6 23:56:34.188838 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 23:56:34.188846 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 23:56:34.188854 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 6 23:56:34.188862 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 23:56:34.188870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 23:56:34.188880 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:56:34.188888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 23:56:34.188896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 23:56:34.188904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:56:34.188921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 23:56:34.188929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 23:56:34.188936 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:56:34.188946 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 23:56:34.188954 kernel: TSC deadline timer available Nov 6 23:56:34.188963 kernel: CPU topo: Max. logical packages: 1 Nov 6 23:56:34.188971 kernel: CPU topo: Max. logical dies: 1 Nov 6 23:56:34.188979 kernel: CPU topo: Max. dies per package: 1 Nov 6 23:56:34.188987 kernel: CPU topo: Max. threads per core: 1 Nov 6 23:56:34.188994 kernel: CPU topo: Num. cores per package: 4 Nov 6 23:56:34.189004 kernel: CPU topo: Num. threads per package: 4 Nov 6 23:56:34.189012 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 6 23:56:34.189020 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 23:56:34.189028 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 6 23:56:34.189036 kernel: kvm-guest: setup PV sched yield Nov 6 23:56:34.189043 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 23:56:34.189051 kernel: Booting paravirtualized kernel on KVM Nov 6 23:56:34.189060 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:56:34.189070 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 6 23:56:34.189078 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 6 23:56:34.189086 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 6 23:56:34.189094 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 6 23:56:34.189102 kernel: kvm-guest: PV spinlocks enabled Nov 6 23:56:34.189110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 23:56:34.189119 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e Nov 6 23:56:34.189129 kernel: random: crng init done Nov 6 23:56:34.189137 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 23:56:34.189145 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 23:56:34.189153 kernel: Fallback order for Node 0: 0 Nov 6 23:56:34.189161 kernel: Built 1 zonelists, mobility grouping 
on. Total pages: 642938 Nov 6 23:56:34.189169 kernel: Policy zone: DMA32 Nov 6 23:56:34.189177 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:56:34.189187 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 23:56:34.189195 kernel: ftrace: allocating 40092 entries in 157 pages Nov 6 23:56:34.189203 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 23:56:34.189211 kernel: Dynamic Preempt: voluntary Nov 6 23:56:34.189219 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:56:34.189227 kernel: rcu: RCU event tracing is enabled. Nov 6 23:56:34.189236 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 23:56:34.189246 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:56:34.189254 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:56:34.189262 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:56:34.189270 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:56:34.189278 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 23:56:34.189286 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:56:34.189294 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:56:34.189302 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:56:34.189312 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 6 23:56:34.189332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 23:56:34.189347 kernel: Console: colour VGA+ 80x25 Nov 6 23:56:34.189357 kernel: printk: legacy console [ttyS0] enabled Nov 6 23:56:34.189365 kernel: ACPI: Core revision 20240827 Nov 6 23:56:34.189374 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 23:56:34.189382 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:56:34.189390 kernel: x2apic enabled Nov 6 23:56:34.189399 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 23:56:34.189409 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 6 23:56:34.189417 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 6 23:56:34.189425 kernel: kvm-guest: setup PV IPIs Nov 6 23:56:34.189434 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 23:56:34.189444 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 6 23:56:34.189452 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Nov 6 23:56:34.189461 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 23:56:34.189469 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 23:56:34.189477 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 23:56:34.189486 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:56:34.189494 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 23:56:34.189504 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:56:34.189512 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 23:56:34.189521 kernel: active return thunk: retbleed_return_thunk Nov 6 23:56:34.189529 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 23:56:34.189537 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 23:56:34.189546 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 23:56:34.189554 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 6 23:56:34.189565 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 6 23:56:34.189573 kernel: active return thunk: srso_return_thunk Nov 6 23:56:34.189582 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 6 23:56:34.189590 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:56:34.189598 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:56:34.189606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:56:34.189616 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:56:34.189625 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 6 23:56:34.189633 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:56:34.189641 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:56:34.189650 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 23:56:34.189658 kernel: landlock: Up and running. Nov 6 23:56:34.189666 kernel: SELinux: Initializing. Nov 6 23:56:34.189674 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:56:34.189684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:56:34.189693 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 23:56:34.189701 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 23:56:34.189709 kernel: ... version: 0 Nov 6 23:56:34.189717 kernel: ... bit width: 48 Nov 6 23:56:34.189726 kernel: ... generic registers: 6 Nov 6 23:56:34.189734 kernel: ... value mask: 0000ffffffffffff Nov 6 23:56:34.189744 kernel: ... max period: 00007fffffffffff Nov 6 23:56:34.189752 kernel: ... fixed-purpose events: 0 Nov 6 23:56:34.189760 kernel: ... event mask: 000000000000003f Nov 6 23:56:34.189768 kernel: signal: max sigframe size: 1776 Nov 6 23:56:34.189777 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:56:34.189785 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:56:34.189793 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 23:56:34.189803 kernel: smp: Bringing up secondary CPUs ... 
Nov 6 23:56:34.189811 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:56:34.189819 kernel: .... node #0, CPUs: #1 #2 #3 Nov 6 23:56:34.189828 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 23:56:34.189836 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 6 23:56:34.189845 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 114376K reserved, 0K cma-reserved) Nov 6 23:56:34.189853 kernel: devtmpfs: initialized Nov 6 23:56:34.189863 kernel: x86/mm: Memory block size: 128MB Nov 6 23:56:34.189872 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:56:34.189880 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 23:56:34.189888 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:56:34.189897 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:56:34.189913 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:56:34.189922 kernel: audit: type=2000 audit(1762473392.236:1): state=initialized audit_enabled=0 res=1 Nov 6 23:56:34.189932 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:56:34.189940 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:56:34.189949 kernel: cpuidle: using governor menu Nov 6 23:56:34.189957 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:56:34.189965 kernel: dca service started, version 1.12.1 Nov 6 23:56:34.189974 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 6 23:56:34.189983 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 6 23:56:34.189993 kernel: PCI: Using configuration type 1 for base access Nov 6 23:56:34.190002 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 23:56:34.190010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:56:34.190018 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:56:34.190026 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:56:34.190035 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:56:34.190043 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:56:34.190053 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:56:34.190061 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:56:34.190069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:56:34.190077 kernel: ACPI: Interpreter enabled Nov 6 23:56:34.190086 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 23:56:34.190094 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:56:34.190102 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:56:34.190112 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 23:56:34.190121 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 23:56:34.190129 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:56:34.190360 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:56:34.190542 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 23:56:34.190710 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 23:56:34.190725 kernel: PCI host bridge to bus 0000:00 Nov 6 23:56:34.190891 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 23:56:34.191055 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 23:56:34.191206 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 23:56:34.191393 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 6 23:56:34.191547 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 23:56:34.191700 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 6 23:56:34.191851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:56:34.192041 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 23:56:34.192214 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 23:56:34.192393 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Nov 6 23:56:34.192568 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Nov 6 23:56:34.192732 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Nov 6 23:56:34.192893 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 23:56:34.193076 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 23:56:34.193243 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Nov 6 23:56:34.193428 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Nov 6 23:56:34.193602 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Nov 6 23:56:34.193774 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 23:56:34.193949 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Nov 6 23:56:34.194114 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Nov 6 23:56:34.194278 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Nov 6 23:56:34.194481 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 23:56:34.194655 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Nov 6 23:56:34.194819 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Nov 6 23:56:34.194992 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 6 23:56:34.195155 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Nov 6 23:56:34.195367 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 23:56:34.195541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 23:56:34.195741 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 23:56:34.195950 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Nov 6 23:56:34.196118 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Nov 6 23:56:34.196292 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 23:56:34.196472 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 6 23:56:34.196488 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 23:56:34.196497 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 23:56:34.196505 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 23:56:34.196514 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 23:56:34.196522 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 23:56:34.196530 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 23:56:34.196538 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 23:56:34.196549 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 23:56:34.196557 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 23:56:34.196565 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 23:56:34.196573 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 23:56:34.196582 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 23:56:34.196590 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 23:56:34.196598 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 23:56:34.196608 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 23:56:34.196616 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 23:56:34.196625 kernel: iommu: Default domain type: Translated Nov 6 23:56:34.196633 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:56:34.196641 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:56:34.196650 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 23:56:34.196658 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 23:56:34.196668 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 6 23:56:34.196830 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 23:56:34.197002 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 23:56:34.197167 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 23:56:34.197178 kernel: vgaarb: loaded Nov 6 23:56:34.197187 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 23:56:34.197195 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 23:56:34.197207 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 23:56:34.197215 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:56:34.197224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 
23:56:34.197232 kernel: pnp: PnP ACPI init Nov 6 23:56:34.197421 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 23:56:34.197434 kernel: pnp: PnP ACPI: found 6 devices Nov 6 23:56:34.197448 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:56:34.197457 kernel: NET: Registered PF_INET protocol family Nov 6 23:56:34.197467 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 23:56:34.197476 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 23:56:34.197484 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:56:34.197493 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 23:56:34.197501 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 6 23:56:34.197511 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 23:56:34.197520 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:56:34.197528 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:56:34.197536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:56:34.197545 kernel: NET: Registered PF_XDP protocol family Nov 6 23:56:34.197697 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 23:56:34.197848 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 23:56:34.198016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 23:56:34.198167 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 6 23:56:34.198331 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 23:56:34.198484 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 6 23:56:34.198495 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:56:34.198503 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 6 23:56:34.198512 kernel: Initialise system trusted keyrings Nov 6 23:56:34.198524 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 23:56:34.198532 kernel: Key type asymmetric registered Nov 6 23:56:34.198540 kernel: Asymmetric key parser 'x509' registered Nov 6 23:56:34.198549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 23:56:34.198558 kernel: io scheduler mq-deadline registered Nov 6 23:56:34.198566 kernel: io scheduler kyber registered Nov 6 23:56:34.198574 kernel: io scheduler bfq registered Nov 6 23:56:34.198585 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:56:34.198593 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 23:56:34.198602 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 23:56:34.198610 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 6 23:56:34.198619 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:56:34.198627 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:56:34.198635 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 23:56:34.198645 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 23:56:34.198654 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 23:56:34.198821 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 6 23:56:34.198833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 23:56:34.198998 kernel: 
rtc_cmos 00:04: registered as rtc0 Nov 6 23:56:34.199156 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T23:56:32 UTC (1762473392) Nov 6 23:56:34.199327 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 6 23:56:34.199339 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 23:56:34.199347 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:56:34.199355 kernel: Segment Routing with IPv6 Nov 6 23:56:34.199363 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:56:34.199372 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:56:34.199380 kernel: Key type dns_resolver registered Nov 6 23:56:34.199391 kernel: IPI shorthand broadcast: enabled Nov 6 23:56:34.199399 kernel: sched_clock: Marking stable (1118002735, 225526151)->(1393526772, -49997886) Nov 6 23:56:34.199408 kernel: registered taskstats version 1 Nov 6 23:56:34.199417 kernel: Loading compiled-in X.509 certificates Nov 6 23:56:34.199427 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: a701a154daed2de4fe9459199e7b4f93a1f30f1e' Nov 6 23:56:34.199436 kernel: Demotion targets for Node 0: null Nov 6 23:56:34.199446 kernel: Key type .fscrypt registered Nov 6 23:56:34.199456 kernel: Key type fscrypt-provisioning registered Nov 6 23:56:34.199464 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 23:56:34.199472 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:56:34.199480 kernel: ima: No architecture policies found Nov 6 23:56:34.199489 kernel: clk: Disabling unused clocks Nov 6 23:56:34.199497 kernel: Freeing unused kernel image (initmem) memory: 15956K Nov 6 23:56:34.199505 kernel: Write protecting the kernel read-only data: 40960k Nov 6 23:56:34.199514 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 6 23:56:34.199524 kernel: Run /init as init process Nov 6 23:56:34.199533 kernel: with arguments: Nov 6 23:56:34.199541 kernel: /init Nov 6 23:56:34.199556 kernel: with environment: Nov 6 23:56:34.199571 kernel: HOME=/ Nov 6 23:56:34.199587 kernel: TERM=linux Nov 6 23:56:34.199603 kernel: SCSI subsystem initialized Nov 6 23:56:34.199624 kernel: libata version 3.00 loaded. 
Nov 6 23:56:34.199968 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 23:56:34.200029 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 23:56:34.200387 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 23:56:34.200602 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 23:56:34.200835 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 23:56:34.201040 kernel: scsi host0: ahci Nov 6 23:56:34.201221 kernel: scsi host1: ahci Nov 6 23:56:34.201413 kernel: scsi host2: ahci Nov 6 23:56:34.201589 kernel: scsi host3: ahci Nov 6 23:56:34.201764 kernel: scsi host4: ahci Nov 6 23:56:34.201961 kernel: scsi host5: ahci Nov 6 23:56:34.201974 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Nov 6 23:56:34.201983 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Nov 6 23:56:34.201991 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Nov 6 23:56:34.202000 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Nov 6 23:56:34.202009 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Nov 6 23:56:34.202020 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Nov 6 23:56:34.202029 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 23:56:34.202037 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 23:56:34.202046 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 23:56:34.202054 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 23:56:34.202063 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 23:56:34.202071 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 23:56:34.202080 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 23:56:34.202090 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 23:56:34.202099 kernel: ata3.00: applying bridge limits Nov 6 23:56:34.202108 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 23:56:34.202116 kernel: ata3.00: configured for UDMA/100 Nov 6 23:56:34.202308 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 23:56:34.202501 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 23:56:34.202668 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 6 23:56:34.202680 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:56:34.202688 kernel: GPT:16515071 != 27000831 Nov 6 23:56:34.202697 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:56:34.202706 kernel: GPT:16515071 != 27000831 Nov 6 23:56:34.202714 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 23:56:34.202723 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:56:34.202914 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 23:56:34.202927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 23:56:34.203107 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 6 23:56:34.203118 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 6 23:56:34.203127 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:56:34.203136 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 23:56:34.203148 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 6 23:56:34.203158 kernel: raid6: avx2x4 gen() 30118 MB/s Nov 6 23:56:34.203167 kernel: raid6: avx2x2 gen() 31255 MB/s Nov 6 23:56:34.203175 kernel: raid6: avx2x1 gen() 25879 MB/s Nov 6 23:56:34.203184 kernel: raid6: using algorithm avx2x2 gen() 31255 MB/s Nov 6 23:56:34.203201 kernel: raid6: .... xor() 19958 MB/s, rmw enabled Nov 6 23:56:34.203210 kernel: raid6: using avx2x2 recovery algorithm Nov 6 23:56:34.203218 kernel: xor: automatically using best checksumming function avx Nov 6 23:56:34.203227 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:56:34.203236 kernel: BTRFS: device fsid e643e10b-d997-4333-8d60-30d1c22703fe devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182) Nov 6 23:56:34.203245 kernel: BTRFS info (device dm-0): first mount of filesystem e643e10b-d997-4333-8d60-30d1c22703fe Nov 6 23:56:34.203262 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:56:34.203281 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:56:34.203292 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 23:56:34.203301 kernel: loop: module loaded Nov 6 23:56:34.203310 kernel: loop0: detected capacity change from 0 to 100120 Nov 6 23:56:34.203333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:56:34.203343 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:56:34.203358 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:56:34.203367 systemd[1]: Detected virtualization kvm. Nov 6 23:56:34.203376 systemd[1]: Detected architecture x86-64. Nov 6 23:56:34.203385 systemd[1]: Running in initrd. Nov 6 23:56:34.203394 systemd[1]: No hostname configured, using default hostname. Nov 6 23:56:34.203403 systemd[1]: Hostname set to <localhost>. Nov 6 23:56:34.203415 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 6 23:56:34.203424 systemd[1]: Queued start job for default target initrd.target. Nov 6 23:56:34.203433 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:56:34.203442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:56:34.203451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:56:34.203461 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 23:56:34.203470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:56:34.203482 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:56:34.203491 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:56:34.203501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 6 23:56:34.203510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:56:34.203519 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 23:56:34.203528 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:56:34.203539 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:56:34.203548 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:56:34.203557 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:56:34.203566 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:56:34.203575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:56:34.203585 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:56:34.203595 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:56:34.203604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:56:34.203614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:56:34.203623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:56:34.203632 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:56:34.203641 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:56:34.203650 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 23:56:34.203661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:56:34.203671 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:56:34.203680 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 23:56:34.203689 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:56:34.203698 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:56:34.203707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:56:34.203716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:56:34.203728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:56:34.203737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:56:34.203747 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 23:56:34.203758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:56:34.203788 systemd-journald[315]: Collecting audit messages is disabled. Nov 6 23:56:34.203808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 23:56:34.203819 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:56:34.203829 systemd-journald[315]: Journal started Nov 6 23:56:34.203847 systemd-journald[315]: Runtime Journal (/run/log/journal/41aa928ba5474a65b349ea6cfd64efb1) is 6M, max 48.3M, 42.2M free. Nov 6 23:56:34.208345 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:56:34.210643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 6 23:56:34.216343 kernel: Bridge firewalling registered Nov 6 23:56:34.215490 systemd-modules-load[318]: Inserted module 'br_netfilter' Nov 6 23:56:34.215499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:56:34.217623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:56:34.286967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:56:34.289888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:56:34.291622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:56:34.312162 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 23:56:34.314930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:56:34.319689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:56:34.322951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:56:34.326143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:56:34.328567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:56:34.343714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 23:56:34.369209 dracut-cmdline[362]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e Nov 6 23:56:34.391912 systemd-resolved[357]: Positive Trust Anchors: Nov 6 23:56:34.391925 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:56:34.391929 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 6 23:56:34.391960 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:56:34.419873 systemd-resolved[357]: Defaulting to hostname 'linux'. Nov 6 23:56:34.421091 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:56:34.425565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:56:34.495347 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:56:34.509349 kernel: iscsi: registered transport (tcp) Nov 6 23:56:34.532407 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:56:34.532440 kernel: QLogic iSCSI HBA Driver Nov 6 23:56:34.558960 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 6 23:56:34.585607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:56:34.589275 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:56:34.647782 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 23:56:34.671936 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:56:34.675827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 23:56:34.708502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:56:34.713866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:56:34.747382 systemd-udevd[601]: Using default interface naming scheme 'v257'. Nov 6 23:56:34.799336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:56:34.803298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:56:34.834822 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Nov 6 23:56:34.838508 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:56:34.842256 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:56:34.871474 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:56:34.877039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:56:34.900224 systemd-networkd[712]: lo: Link UP Nov 6 23:56:34.900232 systemd-networkd[712]: lo: Gained carrier Nov 6 23:56:34.900873 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:56:34.903286 systemd[1]: Reached target network.target - Network. Nov 6 23:56:34.967940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:56:35.024445 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 23:56:35.064090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 23:56:35.105341 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:56:35.126934 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 23:56:35.130350 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 23:56:35.130354 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:56:35.131653 systemd-networkd[712]: eth0: Link UP Nov 6 23:56:35.131858 systemd-networkd[712]: eth0: Gained carrier Nov 6 23:56:35.131867 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 23:56:35.147345 kernel: AES CTR mode by8 optimization enabled Nov 6 23:56:35.150368 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 23:56:35.157568 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 23:56:35.152048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:56:35.168687 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 23:56:35.189477 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Nov 6 23:56:35.197790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:56:35.197882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:56:35.202780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:56:35.205375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:56:35.318093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:56:35.710608 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:56:35.714732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:56:35.718683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:56:35.722402 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:56:35.728129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:56:35.755265 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:56:35.889869 disk-uuid[841]: Primary Header is updated. Nov 6 23:56:35.889869 disk-uuid[841]: Secondary Entries is updated. Nov 6 23:56:35.889869 disk-uuid[841]: Secondary Header is updated. Nov 6 23:56:36.197605 systemd-networkd[712]: eth0: Gained IPv6LL Nov 6 23:56:36.958579 disk-uuid[861]: Warning: The kernel is still using the old partition table. Nov 6 23:56:36.958579 disk-uuid[861]: The new table will be used at the next reboot or after you Nov 6 23:56:36.958579 disk-uuid[861]: run partprobe(8) or kpartx(8) Nov 6 23:56:36.958579 disk-uuid[861]: The operation has completed successfully. Nov 6 23:56:36.969979 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:56:36.970138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:56:36.972207 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:56:37.008638 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Nov 6 23:56:37.008675 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819 Nov 6 23:56:37.008686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:56:37.013904 kernel: BTRFS info (device vda6): turning on async discard Nov 6 23:56:37.013923 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 23:56:37.022364 kernel: BTRFS info (device vda6): last unmount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819 Nov 6 23:56:37.023480 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 23:56:37.025580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 6 23:56:37.170857 ignition[890]: Ignition 2.22.0 Nov 6 23:56:37.170872 ignition[890]: Stage: fetch-offline Nov 6 23:56:37.171181 ignition[890]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:37.171193 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:37.172242 ignition[890]: parsed url from cmdline: "" Nov 6 23:56:37.172247 ignition[890]: no config URL provided Nov 6 23:56:37.172252 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:56:37.172265 ignition[890]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:56:37.173273 ignition[890]: op(1): [started] loading QEMU firmware config module Nov 6 23:56:37.173280 ignition[890]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 23:56:37.185663 ignition[890]: op(1): [finished] loading QEMU firmware config module Nov 6 23:56:37.185710 ignition[890]: QEMU firmware config was not found. Ignoring... Nov 6 23:56:37.265454 ignition[890]: parsing config with SHA512: 07c6cb800d71c8e257924be5d179148ea4a7f6703b2e7b8d779c13bf4740986be75d572f66c40edc6fac9b843601ce7beff044b9a477915d40d23ed5a1945312 Nov 6 23:56:37.268858 unknown[890]: fetched base config from "system" Nov 6 23:56:37.268872 unknown[890]: fetched user config from "qemu" Nov 6 23:56:37.269201 ignition[890]: fetch-offline: fetch-offline passed Nov 6 23:56:37.271960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:56:37.269249 ignition[890]: Ignition finished successfully Nov 6 23:56:37.274110 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 23:56:37.275001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:56:37.308034 ignition[900]: Ignition 2.22.0 Nov 6 23:56:37.308047 ignition[900]: Stage: kargs Nov 6 23:56:37.308177 ignition[900]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:37.308187 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:37.309179 ignition[900]: kargs: kargs passed Nov 6 23:56:37.309451 ignition[900]: Ignition finished successfully Nov 6 23:56:37.316191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:56:37.320608 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:56:37.356910 ignition[907]: Ignition 2.22.0 Nov 6 23:56:37.356924 ignition[907]: Stage: disks Nov 6 23:56:37.357080 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:37.357092 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:37.358027 ignition[907]: disks: disks passed Nov 6 23:56:37.358080 ignition[907]: Ignition finished successfully Nov 6 23:56:37.375042 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 23:56:37.376161 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:56:37.379900 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:56:37.383089 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:56:37.386915 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:56:37.390009 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:56:37.394296 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 6 23:56:37.443736 systemd-fsck[917]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 6 23:56:37.513965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 23:56:37.515957 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 23:56:37.662356 kernel: EXT4-fs (vda9): mounted filesystem 9eac1486-40e9-4edf-8a17-71182690c138 r/w with ordered data mode. Quota mode: none. Nov 6 23:56:37.662946 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:56:37.666187 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:56:37.671289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:56:37.675009 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:56:37.678193 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 23:56:37.678257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:56:37.678298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:56:37.696590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:56:37.700534 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:56:37.709258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (925) Nov 6 23:56:37.709292 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819 Nov 6 23:56:37.709307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:56:37.712833 kernel: BTRFS info (device vda6): turning on async discard Nov 6 23:56:37.712902 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 23:56:37.714338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:56:37.765300 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:56:37.770290 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:56:37.776703 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:56:37.782615 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:56:37.883992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:56:37.886172 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:56:37.888846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 23:56:37.910345 kernel: BTRFS info (device vda6): last unmount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819 Nov 6 23:56:37.923468 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 23:56:37.940356 ignition[1039]: INFO : Ignition 2.22.0 Nov 6 23:56:37.940356 ignition[1039]: INFO : Stage: mount Nov 6 23:56:37.943057 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:37.943057 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:37.943057 ignition[1039]: INFO : mount: mount passed Nov 6 23:56:37.943057 ignition[1039]: INFO : Ignition finished successfully Nov 6 23:56:37.951422 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:56:37.954767 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:56:37.996823 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 6 23:56:37.998799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:56:38.036493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1051) Nov 6 23:56:38.036537 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819 Nov 6 23:56:38.036550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:56:38.041613 kernel: BTRFS info (device vda6): turning on async discard Nov 6 23:56:38.041678 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 23:56:38.043364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:56:38.082561 ignition[1068]: INFO : Ignition 2.22.0 Nov 6 23:56:38.082561 ignition[1068]: INFO : Stage: files Nov 6 23:56:38.085515 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:38.085515 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:38.085515 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:56:38.090981 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:56:38.090981 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:56:38.099063 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:56:38.101523 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:56:38.104151 unknown[1068]: wrote ssh authorized keys file for user: core Nov 6 23:56:38.105882 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:56:38.109287 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:56:38.112721 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 23:56:38.154673 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:56:38.236748 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:56:38.239847 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:56:38.239847 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:56:38.496870 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:56:38.568390 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:56:38.568390 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:56:38.575163 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:56:38.607931 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:56:38.607931 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:56:38.607931 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 23:56:38.948055 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 23:56:39.316593 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:56:39.316593 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 6 23:56:39.322983 ignition[1068]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 23:56:39.350549 ignition[1068]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 23:56:39.355352 ignition[1068]: INFO : files: op(10): op(11): [finished] removing enablement 
symlink(s) for "coreos-metadata.service" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:56:39.358059 ignition[1068]: INFO : files: files passed Nov 6 23:56:39.358059 ignition[1068]: INFO : Ignition finished successfully Nov 6 23:56:39.373790 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 23:56:39.384893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 23:56:39.388681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 23:56:39.410465 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 23:56:39.410599 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 23:56:39.415540 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 23:56:39.417880 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:56:39.417880 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:56:39.423016 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:56:39.423253 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:56:39.429449 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 23:56:39.431009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 23:56:39.476349 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 23:56:39.476534 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 23:56:39.478632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 23:56:39.482912 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 23:56:39.488790 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 23:56:39.491485 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 23:56:39.526992 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:56:39.529196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 23:56:39.557930 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:56:39.558107 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:56:39.559081 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:56:39.559899 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 23:56:39.567788 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Nov 6 23:56:39.567910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:56:39.572943 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 23:56:39.576285 systemd[1]: Stopped target basic.target - Basic System. Nov 6 23:56:39.577158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 23:56:39.581254 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:56:39.584915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 23:56:39.588216 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 23:56:39.591823 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 23:56:39.594907 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:56:39.595742 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 23:56:39.601936 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 23:56:39.604727 systemd[1]: Stopped target swap.target - Swaps. Nov 6 23:56:39.607723 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 23:56:39.607861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:56:39.612691 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:56:39.613828 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:56:39.617983 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 23:56:39.622438 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:56:39.623099 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 23:56:39.623217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 23:56:39.629674 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 23:56:39.629843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:56:39.630915 systemd[1]: Stopped target paths.target - Path Units. Nov 6 23:56:39.635080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 23:56:39.641429 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:56:39.642173 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 23:56:39.646347 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 23:56:39.649057 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 23:56:39.649152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:56:39.651859 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 23:56:39.651945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:56:39.654903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 23:56:39.655036 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:56:39.657844 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 23:56:39.657950 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 23:56:39.665221 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 23:56:39.669217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 23:56:39.670273 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 6 23:56:39.670447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:56:39.675088 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 23:56:39.675229 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:56:39.677054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 23:56:39.677184 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:56:39.692249 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 23:56:39.703577 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 23:56:39.727787 ignition[1126]: INFO : Ignition 2.22.0 Nov 6 23:56:39.727787 ignition[1126]: INFO : Stage: umount Nov 6 23:56:39.730389 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:56:39.730389 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 23:56:39.730389 ignition[1126]: INFO : umount: umount passed Nov 6 23:56:39.730389 ignition[1126]: INFO : Ignition finished successfully Nov 6 23:56:39.735994 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 23:56:39.736628 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 23:56:39.736754 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 23:56:39.738420 systemd[1]: Stopped target network.target - Network. Nov 6 23:56:39.740721 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 23:56:39.740793 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 23:56:39.743260 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 23:56:39.743334 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 23:56:39.746245 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 23:56:39.746298 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 23:56:39.749053 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 23:56:39.749101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 23:56:39.753048 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 23:56:39.757101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 23:56:39.772755 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 23:56:39.772912 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 23:56:39.779595 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 23:56:39.779781 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 23:56:39.787533 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 23:56:39.791212 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 23:56:39.791296 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:56:39.797390 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 23:56:39.798061 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 23:56:39.798141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:56:39.806021 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:56:39.806090 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:56:39.809240 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Nov 6 23:56:39.809299 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 23:56:39.812704 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:56:39.824757 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:56:39.824911 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:56:39.826265 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:56:39.826346 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:56:39.838149 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:56:39.849704 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:56:39.854252 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:56:39.854313 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:56:39.857610 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:56:39.857652 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:56:39.858757 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:56:39.858822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:56:39.867494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:56:39.867581 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:56:39.870698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:56:39.870784 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:56:39.876434 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:56:39.880661 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 23:56:39.882500 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:56:39.888658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:56:39.888791 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:56:39.889909 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 23:56:39.889968 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:56:39.890836 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 23:56:39.890879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:56:39.899078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:56:39.899171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:56:39.900585 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:56:39.900702 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:56:39.908303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:56:39.908455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:56:39.915442 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:56:39.917032 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:56:39.940541 systemd[1]: Switching root. 
Nov 6 23:56:39.989956 systemd-journald[315]: Journal stopped Nov 6 23:56:41.797893 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Nov 6 23:56:41.797974 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:56:41.797993 kernel: SELinux: policy capability open_perms=1 Nov 6 23:56:41.798012 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:56:41.798033 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:56:41.798049 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:56:41.798065 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:56:41.798086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:56:41.798104 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:56:41.798120 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 23:56:41.798136 kernel: audit: type=1403 audit(1762473400.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:56:41.798153 systemd[1]: Successfully loaded SELinux policy in 132.805ms. Nov 6 23:56:41.798177 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.430ms. Nov 6 23:56:41.798195 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:56:41.798212 systemd[1]: Detected virtualization kvm. Nov 6 23:56:41.798231 systemd[1]: Detected architecture x86-64. Nov 6 23:56:41.798247 systemd[1]: Detected first boot. Nov 6 23:56:41.798264 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 6 23:56:41.798281 zram_generator::config[1172]: No configuration found. Nov 6 23:56:41.798299 kernel: Guest personality initialized and is inactive Nov 6 23:56:41.798329 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 23:56:41.798345 kernel: Initialized host personality Nov 6 23:56:41.798364 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:56:41.798380 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:56:41.798398 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 23:56:41.798417 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:56:41.798439 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:56:41.798457 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:56:41.798473 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:56:41.798493 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:56:41.798509 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:56:41.798526 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:56:41.798543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:56:41.798560 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:56:41.798576 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:56:41.798593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 6 23:56:41.798612 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:56:41.798630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:56:41.798647 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:56:41.798672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:56:41.798690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:56:41.798707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 23:56:41.798726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:56:41.798745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:56:41.798764 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:56:41.798781 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:56:41.798799 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:56:41.798815 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:56:41.798832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:56:41.798852 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:56:41.798868 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:56:41.798885 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:56:41.798902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:56:41.798919 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:56:41.798936 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:56:41.798953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:56:41.798972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:56:41.798989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:56:41.799005 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:56:41.799023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:56:41.799040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:56:41.799057 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:56:41.799073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:41.799092 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:56:41.799108 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:56:41.799125 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 23:56:41.799143 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:56:41.799160 systemd[1]: Reached target machines.target - Containers. Nov 6 23:56:41.799176 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 6 23:56:41.799193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:56:41.799212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:56:41.799228 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:56:41.799245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:56:41.799262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:56:41.799279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:56:41.799296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:56:41.799312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:56:41.799344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:56:41.799361 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:56:41.799377 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:56:41.799394 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:56:41.799410 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 23:56:41.799428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:56:41.799447 kernel: fuse: init (API version 7.41) Nov 6 23:56:41.799463 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:56:41.799481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:56:41.799497 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:56:41.799514 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:56:41.799531 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:56:41.799548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:56:41.799568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:41.799611 systemd-journald[1250]: Collecting audit messages is disabled. Nov 6 23:56:41.799640 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:56:41.799668 systemd-journald[1250]: Journal started Nov 6 23:56:41.799698 systemd-journald[1250]: Runtime Journal (/run/log/journal/41aa928ba5474a65b349ea6cfd64efb1) is 6M, max 48.3M, 42.2M free. Nov 6 23:56:41.489229 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:56:41.503274 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 23:56:41.503805 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:56:41.802405 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:56:41.803953 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:56:41.806844 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:56:41.808717 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 6 23:56:41.810697 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:56:41.812673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:56:41.814709 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:56:41.817541 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:56:41.822932 kernel: ACPI: bus type drm_connector registered Nov 6 23:56:41.822023 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:56:41.822289 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:56:41.824618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:56:41.824877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:56:41.827870 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:56:41.828236 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:56:41.830248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:56:41.830587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:56:41.832837 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:56:41.833052 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:56:41.835058 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:56:41.835270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:56:41.837307 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:56:41.839500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:56:41.842718 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:56:41.845346 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:56:41.859467 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:56:41.861981 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 6 23:56:41.865145 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:56:41.868277 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:56:41.870225 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:56:41.870257 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:56:41.872806 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:56:41.874919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:56:41.882951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:56:41.886953 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:56:41.888991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:56:41.891462 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:56:41.893238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 6 23:56:41.894435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:56:41.901583 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:56:41.905754 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:56:41.910102 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:56:41.912526 systemd-journald[1250]: Time spent on flushing to /var/log/journal/41aa928ba5474a65b349ea6cfd64efb1 is 28.124ms for 972 entries. Nov 6 23:56:41.912526 systemd-journald[1250]: System Journal (/var/log/journal/41aa928ba5474a65b349ea6cfd64efb1) is 8M, max 163.5M, 155.5M free. Nov 6 23:56:41.949499 systemd-journald[1250]: Received client request to flush runtime journal. Nov 6 23:56:41.949549 kernel: loop1: detected capacity change from 0 to 110984 Nov 6 23:56:41.914304 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:56:41.916558 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:56:41.919045 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:56:41.924643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:56:41.929440 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:56:41.931792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:56:41.950882 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Nov 6 23:56:41.950896 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Nov 6 23:56:41.956279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:56:41.967987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:56:41.973942 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:56:41.982355 kernel: loop2: detected capacity change from 0 to 128048 Nov 6 23:56:41.990498 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:56:42.010808 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:56:42.014743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:56:42.017284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:56:42.023342 kernel: loop3: detected capacity change from 0 to 219144 Nov 6 23:56:42.036462 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:56:42.046350 kernel: loop4: detected capacity change from 0 to 110984 Nov 6 23:56:42.051276 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Nov 6 23:56:42.051301 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Nov 6 23:56:42.056108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:56:42.058372 kernel: loop5: detected capacity change from 0 to 128048 Nov 6 23:56:42.068390 kernel: loop6: detected capacity change from 0 to 219144 Nov 6 23:56:42.074133 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 6 23:56:42.078594 (sd-merge)[1317]: Merged extensions into '/usr'. Nov 6 23:56:42.122207 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... 
Nov 6 23:56:42.122229 systemd[1]: Reloading... Nov 6 23:56:42.178343 zram_generator::config[1351]: No configuration found. Nov 6 23:56:42.211740 systemd-resolved[1312]: Positive Trust Anchors: Nov 6 23:56:42.211756 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:56:42.211761 systemd-resolved[1312]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 6 23:56:42.211791 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:56:42.215780 systemd-resolved[1312]: Defaulting to hostname 'linux'. Nov 6 23:56:42.368331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:56:42.368531 systemd[1]: Reloading finished in 245 ms. Nov 6 23:56:42.392756 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:56:42.394786 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:56:42.396804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:56:42.400943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:56:42.424041 systemd[1]: Starting ensure-sysext.service... Nov 6 23:56:42.426540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:56:42.446237 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 23:56:42.446274 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 23:56:42.446675 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:56:42.447027 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:56:42.448279 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:56:42.448722 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Nov 6 23:56:42.448815 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Nov 6 23:56:42.453718 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:56:42.453737 systemd[1]: Reloading... Nov 6 23:56:42.454912 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:56:42.454924 systemd-tmpfiles[1389]: Skipping /boot Nov 6 23:56:42.465398 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:56:42.465411 systemd-tmpfiles[1389]: Skipping /boot Nov 6 23:56:42.509402 zram_generator::config[1419]: No configuration found. Nov 6 23:56:42.682296 systemd[1]: Reloading finished in 228 ms. Nov 6 23:56:42.707861 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Nov 6 23:56:42.726438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:56:42.737186 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:56:42.739893 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:56:42.751499 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:56:42.756534 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:56:42.765388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:56:42.774558 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:56:42.779195 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.779402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:56:42.780816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:56:42.791423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:56:42.798671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:56:42.800749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:56:42.801130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:56:42.801225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.802578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:56:42.802826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:56:42.805828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:56:42.806064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:56:42.814666 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:56:42.817848 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:56:42.818124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:56:42.832223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:56:42.837542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.837775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:56:42.839136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:56:42.841990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:56:42.844898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:56:42.846594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 6 23:56:42.846753 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:56:42.846879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.847580 systemd-udevd[1466]: Using default interface naming scheme 'v257'. Nov 6 23:56:42.851236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.851829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:56:42.856758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:56:42.858511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:56:42.858608 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:56:42.858737 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:56:42.859842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:56:42.860075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:56:42.862363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:56:42.862565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:56:42.865240 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:56:42.865576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:56:42.867818 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:56:42.868012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:56:42.873042 systemd[1]: Finished ensure-sysext.service. Nov 6 23:56:42.880453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:56:42.880509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:56:42.882260 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:56:42.938750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:56:42.969996 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:56:42.983588 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:56:42.988004 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:56:42.988228 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:56:42.991128 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 6 23:56:43.020872 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:56:43.037354 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 23:56:43.043390 kernel: ACPI: button: Power Button [PWRF] Nov 6 23:56:43.043424 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:56:43.067519 systemd-networkd[1522]: lo: Link UP Nov 6 23:56:43.067531 systemd-networkd[1522]: lo: Gained carrier Nov 6 23:56:43.069023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:56:43.081482 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 23:56:43.081495 systemd-networkd[1522]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:56:43.089739 systemd-networkd[1522]: eth0: Link UP Nov 6 23:56:43.089970 systemd-networkd[1522]: eth0: Gained carrier Nov 6 23:56:43.089992 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 23:56:43.094883 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:56:43.098445 augenrules[1535]: No rules Nov 6 23:56:43.103897 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 23:56:43.104216 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 23:56:43.104432 systemd-networkd[1522]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 23:56:43.106398 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Nov 6 23:56:44.092223 systemd-resolved[1312]: Clock change detected. Flushing caches. Nov 6 23:56:44.092327 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 23:56:44.092378 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-11-06 23:56:44.092172 UTC. Nov 6 23:56:44.094117 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:56:44.094415 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:56:44.101570 systemd[1]: Reached target network.target - Network. Nov 6 23:56:44.119305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:56:44.123990 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:56:44.129307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:56:44.235326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:56:44.238075 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:56:44.265224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 23:56:44.270450 kernel: kvm_amd: TSC scaling supported Nov 6 23:56:44.270482 kernel: kvm_amd: Nested Virtualization enabled Nov 6 23:56:44.270496 kernel: kvm_amd: Nested Paging enabled Nov 6 23:56:44.272180 kernel: kvm_amd: LBR virtualization supported Nov 6 23:56:44.272202 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 6 23:56:44.273891 kernel: kvm_amd: Virtual GIF supported Nov 6 23:56:44.299145 kernel: EDAC MC: Ver: 3.0.0 Nov 6 23:56:44.410409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:56:44.432980 ldconfig[1460]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:56:44.664744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:56:44.668349 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:56:44.703782 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:56:44.705892 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:56:44.707783 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:56:44.709801 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:56:44.711852 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 23:56:44.713928 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:56:44.715783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:56:44.717817 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:56:44.719853 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:56:44.719884 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:56:44.721552 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:56:44.723938 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:56:44.727379 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:56:44.731091 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:56:44.749219 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:56:44.751237 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:56:44.756163 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:56:44.758090 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:56:44.760528 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:56:44.762881 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:56:44.764439 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:56:44.766067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:56:44.766094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:56:44.767046 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:56:44.769782 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 6 23:56:44.772554 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:56:44.775749 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:56:44.778524 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:56:44.781266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:56:44.782717 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 23:56:44.784898 jq[1579]: false Nov 6 23:56:44.785791 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:56:44.790208 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:56:44.795470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:56:44.796055 extend-filesystems[1580]: Found /dev/vda6 Nov 6 23:56:44.800400 extend-filesystems[1580]: Found /dev/vda9 Nov 6 23:56:44.796361 oslogin_cache_refresh[1581]: Refreshing passwd entry cache Nov 6 23:56:44.807926 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing passwd entry cache Nov 6 23:56:44.800670 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:56:44.808144 extend-filesystems[1580]: Checking size of /dev/vda9 Nov 6 23:56:44.812770 oslogin_cache_refresh[1581]: Failure getting users, quitting Nov 6 23:56:44.816649 extend-filesystems[1580]: Resized partition /dev/vda9 Nov 6 23:56:44.818383 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting users, quitting Nov 6 23:56:44.818383 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 23:56:44.818383 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing group entry cache Nov 6 23:56:44.809764 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:56:44.812793 oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 23:56:44.818680 extend-filesystems[1604]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 23:56:44.835835 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 6 23:56:44.811581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:56:44.812849 oslogin_cache_refresh[1581]: Refreshing group entry cache Nov 6 23:56:44.845383 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting groups, quitting Nov 6 23:56:44.845383 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 23:56:44.812159 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:56:44.829025 oslogin_cache_refresh[1581]: Failure getting groups, quitting Nov 6 23:56:44.813771 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:56:44.829039 oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 23:56:44.818929 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 6 23:56:44.845795 jq[1605]: true Nov 6 23:56:44.824794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:56:44.830917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:56:44.831205 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:56:44.831547 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 23:56:44.831803 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 23:56:44.852038 update_engine[1602]: I20251106 23:56:44.848522 1602 main.cc:92] Flatcar Update Engine starting Nov 6 23:56:44.840814 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:56:44.841066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:56:44.846742 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:56:44.846980 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:56:44.864007 jq[1616]: true Nov 6 23:56:44.877433 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:56:44.944964 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 23:56:44.944990 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:56:44.945541 systemd-logind[1598]: New seat seat0. Nov 6 23:56:44.947465 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:56:44.957161 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 6 23:56:44.961448 tar[1613]: linux-amd64/LICENSE Nov 6 23:56:44.989208 dbus-daemon[1577]: [system] SELinux support is enabled Nov 6 23:56:44.989470 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:56:45.171715 update_engine[1602]: I20251106 23:56:44.994562 1602 update_check_scheduler.cc:74] Next update check in 8m46s Nov 6 23:56:45.171750 tar[1613]: linux-amd64/helm Nov 6 23:56:45.171772 containerd[1626]: time="2025-11-06T23:56:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 23:56:44.996389 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 23:56:44.995168 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:56:45.172090 containerd[1626]: time="2025-11-06T23:56:45.172015180Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 23:56:44.995196 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:56:44.997289 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:56:44.997303 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:56:44.999320 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:56:45.002893 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 6 23:56:45.052226 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:56:45.174695 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:56:45.174877 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 23:56:45.174877 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 23:56:45.174877 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 6 23:56:45.185203 extend-filesystems[1580]: Resized filesystem in /dev/vda9 Nov 6 23:56:45.186735 bash[1643]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:56:45.176092 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:56:45.176417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:56:45.180002 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:56:45.182621 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 23:56:45.187610 containerd[1626]: time="2025-11-06T23:56:45.187571939Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.459µs" Nov 6 23:56:45.187610 containerd[1626]: time="2025-11-06T23:56:45.187603338Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 23:56:45.187656 containerd[1626]: time="2025-11-06T23:56:45.187621372Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 23:56:45.187810 containerd[1626]: time="2025-11-06T23:56:45.187789718Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 23:56:45.187810 containerd[1626]: time="2025-11-06T23:56:45.187805708Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 23:56:45.187847 containerd[1626]: time="2025-11-06T23:56:45.187828310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 23:56:45.187901 containerd[1626]: time="2025-11-06T23:56:45.187886980Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 23:56:45.187920 containerd[1626]: time="2025-11-06T23:56:45.187898381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188146 containerd[1626]: time="2025-11-06T23:56:45.188113695Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188170 containerd[1626]: time="2025-11-06T23:56:45.188145615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188170 containerd[1626]: time="2025-11-06T23:56:45.188156265Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188170 containerd[1626]: time="2025-11-06T23:56:45.188163889Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188259 containerd[1626]: time="2025-11-06T23:56:45.188245723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188480 containerd[1626]: time="2025-11-06T23:56:45.188458061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188507 containerd[1626]: time="2025-11-06T23:56:45.188489019Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 23:56:45.188507 containerd[1626]: time="2025-11-06T23:56:45.188498276Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 23:56:45.188542 containerd[1626]: time="2025-11-06T23:56:45.188526709Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 23:56:45.188716 containerd[1626]: time="2025-11-06T23:56:45.188701207Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 23:56:45.188771 containerd[1626]: time="2025-11-06T23:56:45.188759476Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:56:45.194806 containerd[1626]: time="2025-11-06T23:56:45.194772861Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 23:56:45.194848 containerd[1626]: time="2025-11-06T23:56:45.194822795Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 23:56:45.194848 containerd[1626]: time="2025-11-06T23:56:45.194839516Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 23:56:45.194898 containerd[1626]: time="2025-11-06T23:56:45.194850557Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 23:56:45.194898 containerd[1626]: time="2025-11-06T23:56:45.194861297Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 23:56:45.194898 containerd[1626]: time="2025-11-06T23:56:45.194870364Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 23:56:45.194898 containerd[1626]: time="2025-11-06T23:56:45.194881645Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 23:56:45.194898 containerd[1626]: time="2025-11-06T23:56:45.194891333Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 23:56:45.194987 containerd[1626]: time="2025-11-06T23:56:45.194900550Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 23:56:45.194987 containerd[1626]: time="2025-11-06T23:56:45.194910128Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 23:56:45.194987 containerd[1626]: time="2025-11-06T23:56:45.194917963Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 23:56:45.194987 containerd[1626]: time="2025-11-06T23:56:45.194930366Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 23:56:45.195075 
containerd[1626]: time="2025-11-06T23:56:45.195037728Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 23:56:45.195075 containerd[1626]: time="2025-11-06T23:56:45.195055110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 23:56:45.195075 containerd[1626]: time="2025-11-06T23:56:45.195068505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195084014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195096167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195107799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195140180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195150429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195163513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 23:56:45.195180 containerd[1626]: time="2025-11-06T23:56:45.195173723Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 23:56:45.195476 containerd[1626]: time="2025-11-06T23:56:45.195184433Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 23:56:45.195476 containerd[1626]: time="2025-11-06T23:56:45.195252260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 23:56:45.195476 containerd[1626]: time="2025-11-06T23:56:45.195265014Z" level=info msg="Start snapshots syncer" Nov 6 23:56:45.195476 containerd[1626]: time="2025-11-06T23:56:45.195291584Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 23:56:45.195550 containerd[1626]: time="2025-11-06T23:56:45.195516565Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 23:56:45.195636 containerd[1626]: time="2025-11-06T23:56:45.195560247Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 23:56:45.195636 containerd[1626]: time="2025-11-06T23:56:45.195620430Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 23:56:45.195738 containerd[1626]: time="2025-11-06T23:56:45.195722381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195746176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195755473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195767817Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195777725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195787614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 23:56:45.195809 containerd[1626]: time="2025-11-06T23:56:45.195797041Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195816277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 23:56:45.195982 containerd[1626]: 
time="2025-11-06T23:56:45.195826156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195835934Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195868606Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195881139Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195888944Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195897600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195904673Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195916545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195926194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195942053Z" level=info msg="runtime interface created" Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195947193Z" level=info msg="created NRI interface" Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195954897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195963994Z" level=info msg="Connect containerd service" Nov 6 23:56:45.195982 containerd[1626]: time="2025-11-06T23:56:45.195988731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:56:45.196710 containerd[1626]: time="2025-11-06T23:56:45.196684014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:56:45.201761 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:56:45.205894 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:56:45.226436 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:56:45.226801 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:56:45.231225 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:56:45.253914 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:56:45.260606 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:56:45.266505 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:56:45.268537 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 6 23:56:45.375269 systemd-networkd[1522]: eth0: Gained IPv6LL Nov 6 23:56:45.382686 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:56:45.389294 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:56:45.393038 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 23:56:45.397968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:56:45.402112 containerd[1626]: time="2025-11-06T23:56:45.402067899Z" level=info msg="Start subscribing containerd event" Nov 6 23:56:45.402236 containerd[1626]: time="2025-11-06T23:56:45.402184588Z" level=info msg="Start recovering state" Nov 6 23:56:45.402317 containerd[1626]: time="2025-11-06T23:56:45.402296899Z" level=info msg="Start event monitor" Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402326815Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402335932Z" level=info msg="Start streaming server" Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402349728Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402356981Z" level=info msg="runtime interface starting up..." Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402363023Z" level=info msg="starting plugins..." Nov 6 23:56:45.402374 containerd[1626]: time="2025-11-06T23:56:45.402376728Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 23:56:45.402727 containerd[1626]: time="2025-11-06T23:56:45.402699173Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:56:45.402798 containerd[1626]: time="2025-11-06T23:56:45.402772140Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:56:45.403853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:56:45.408693 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:56:45.417813 containerd[1626]: time="2025-11-06T23:56:45.408218712Z" level=info msg="containerd successfully booted in 0.237536s" Nov 6 23:56:45.509888 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 23:56:45.510181 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 23:56:45.512899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:56:45.516229 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:56:45.863742 tar[1613]: linux-amd64/README.md Nov 6 23:56:45.886549 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:56:46.818545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:56:46.821104 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:56:46.822706 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:56:46.824086 systemd[1]: Startup finished in 2.412s (kernel) + 6.966s (initrd) + 5.062s (userspace) = 14.440s. Nov 6 23:56:46.989240 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:56:46.991047 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:53542.service - OpenSSH per-connection server daemon (10.0.0.1:53542). 
Nov 6 23:56:47.065805 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 53542 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:47.068275 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:47.075713 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:56:47.076829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:56:47.130402 systemd-logind[1598]: New session 1 of user core. Nov 6 23:56:47.147052 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:56:47.150167 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:56:47.166624 (systemd)[1732]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:56:47.169570 systemd-logind[1598]: New session c1 of user core. Nov 6 23:56:47.345317 systemd[1732]: Queued start job for default target default.target. Nov 6 23:56:47.361753 systemd[1732]: Created slice app.slice - User Application Slice. Nov 6 23:56:47.361781 systemd[1732]: Reached target paths.target - Paths. Nov 6 23:56:47.361823 systemd[1732]: Reached target timers.target - Timers. Nov 6 23:56:47.363566 systemd[1732]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:56:47.377730 systemd[1732]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:56:47.377850 systemd[1732]: Reached target sockets.target - Sockets. Nov 6 23:56:47.377887 systemd[1732]: Reached target basic.target - Basic System. Nov 6 23:56:47.377926 systemd[1732]: Reached target default.target - Main User Target. Nov 6 23:56:47.377957 systemd[1732]: Startup finished in 200ms. Nov 6 23:56:47.378285 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:56:47.380547 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:56:47.421593 kubelet[1718]: E1106 23:56:47.421522 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:56:47.425302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:56:47.425510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:56:47.425887 systemd[1]: kubelet.service: Consumed 1.759s CPU time, 258M memory peak. Nov 6 23:56:47.441342 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:53552.service - OpenSSH per-connection server daemon (10.0.0.1:53552). Nov 6 23:56:47.503983 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 53552 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:47.505271 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:47.509611 systemd-logind[1598]: New session 2 of user core. Nov 6 23:56:47.525247 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:56:47.578921 sshd[1750]: Connection closed by 10.0.0.1 port 53552 Nov 6 23:56:47.579278 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Nov 6 23:56:47.592566 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:53552.service: Deactivated successfully. Nov 6 23:56:47.594370 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 6 23:56:47.595074 systemd-logind[1598]: Session 2 logged out. Waiting for processes to exit. Nov 6 23:56:47.597609 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:53560.service - OpenSSH per-connection server daemon (10.0.0.1:53560). Nov 6 23:56:47.598321 systemd-logind[1598]: Removed session 2. Nov 6 23:56:47.650696 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 53560 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:47.651824 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:47.656659 systemd-logind[1598]: New session 3 of user core. Nov 6 23:56:47.671277 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:56:47.722410 sshd[1759]: Connection closed by 10.0.0.1 port 53560 Nov 6 23:56:47.722819 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 6 23:56:47.731776 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:53560.service: Deactivated successfully. Nov 6 23:56:47.733437 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:56:47.734141 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:56:47.736719 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:53566.service - OpenSSH per-connection server daemon (10.0.0.1:53566). Nov 6 23:56:47.737295 systemd-logind[1598]: Removed session 3. Nov 6 23:56:47.798068 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 53566 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:47.799877 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:47.804714 systemd-logind[1598]: New session 4 of user core. Nov 6 23:56:47.820327 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:56:47.875940 sshd[1769]: Connection closed by 10.0.0.1 port 53566 Nov 6 23:56:47.876374 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 6 23:56:47.887838 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:53566.service: Deactivated successfully. Nov 6 23:56:47.890417 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:56:47.891268 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:56:47.894510 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:53574.service - OpenSSH per-connection server daemon (10.0.0.1:53574). Nov 6 23:56:47.895557 systemd-logind[1598]: Removed session 4. Nov 6 23:56:47.950640 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 53574 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:47.952351 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:47.956879 systemd-logind[1598]: New session 5 of user core. Nov 6 23:56:47.971393 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:56:48.036478 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:56:48.036882 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:56:48.056406 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 6 23:56:48.058595 sshd[1778]: Connection closed by 10.0.0.1 port 53574 Nov 6 23:56:48.059001 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 6 23:56:48.073378 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:53574.service: Deactivated successfully. Nov 6 23:56:48.075559 systemd[1]: session-5.scope: Deactivated successfully. 
Nov 6 23:56:48.076522 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:56:48.079403 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:53582.service - OpenSSH per-connection server daemon (10.0.0.1:53582). Nov 6 23:56:48.080176 systemd-logind[1598]: Removed session 5. Nov 6 23:56:48.132726 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 53582 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:48.134354 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:48.139438 systemd-logind[1598]: New session 6 of user core. Nov 6 23:56:48.157400 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:56:48.214207 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:56:48.214600 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:56:48.234336 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 6 23:56:48.242050 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:56:48.242371 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:56:48.253595 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:56:48.309462 augenrules[1812]: No rules Nov 6 23:56:48.310746 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:56:48.311105 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:56:48.312693 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 6 23:56:48.315249 sshd[1788]: Connection closed by 10.0.0.1 port 53582 Nov 6 23:56:48.315602 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 6 23:56:48.330468 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:53582.service: Deactivated successfully. Nov 6 23:56:48.332867 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:56:48.333741 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:56:48.337525 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:53592.service - OpenSSH per-connection server daemon (10.0.0.1:53592). Nov 6 23:56:48.338261 systemd-logind[1598]: Removed session 6. Nov 6 23:56:48.396261 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 53592 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:56:48.397726 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:56:48.402381 systemd-logind[1598]: New session 7 of user core. Nov 6 23:56:48.414253 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:56:48.470666 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:56:48.471142 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:56:49.133817 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 6 23:56:49.152454 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:56:49.578070 dockerd[1845]: time="2025-11-06T23:56:49.577932624Z" level=info msg="Starting up" Nov 6 23:56:49.578863 dockerd[1845]: time="2025-11-06T23:56:49.578825148Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 23:56:49.600271 dockerd[1845]: time="2025-11-06T23:56:49.600211467Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 23:56:50.222311 dockerd[1845]: time="2025-11-06T23:56:50.222251121Z" level=info msg="Loading containers: start." Nov 6 23:56:50.234158 kernel: Initializing XFRM netlink socket Nov 6 23:56:50.523867 systemd-networkd[1522]: docker0: Link UP Nov 6 23:56:50.530996 dockerd[1845]: time="2025-11-06T23:56:50.530932268Z" level=info msg="Loading containers: done." Nov 6 23:56:50.551042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3067113757-merged.mount: Deactivated successfully. Nov 6 23:56:50.553627 dockerd[1845]: time="2025-11-06T23:56:50.553567880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:56:50.553713 dockerd[1845]: time="2025-11-06T23:56:50.553693235Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 23:56:50.553863 dockerd[1845]: time="2025-11-06T23:56:50.553837165Z" level=info msg="Initializing buildkit" Nov 6 23:56:50.590444 dockerd[1845]: time="2025-11-06T23:56:50.590392647Z" level=info msg="Completed buildkit initialization" Nov 6 23:56:50.598690 dockerd[1845]: time="2025-11-06T23:56:50.598626415Z" level=info msg="Daemon has completed initialization" Nov 6 23:56:50.598849 dockerd[1845]: time="2025-11-06T23:56:50.598711825Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:56:50.599013 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:56:51.564421 containerd[1626]: time="2025-11-06T23:56:51.564372769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 23:56:52.287143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699465551.mount: Deactivated successfully. 
Nov 6 23:56:53.460907 containerd[1626]: time="2025-11-06T23:56:53.460834204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:53.461558 containerd[1626]: time="2025-11-06T23:56:53.461495233Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 6 23:56:53.462654 containerd[1626]: time="2025-11-06T23:56:53.462630342Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:53.465381 containerd[1626]: time="2025-11-06T23:56:53.465312891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:53.466147 containerd[1626]: time="2025-11-06T23:56:53.466092473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.901676243s" Nov 6 23:56:53.466199 containerd[1626]: time="2025-11-06T23:56:53.466155091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 23:56:53.466891 containerd[1626]: time="2025-11-06T23:56:53.466865703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 23:56:55.026435 containerd[1626]: time="2025-11-06T23:56:55.026374451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:55.027174 containerd[1626]: time="2025-11-06T23:56:55.027151599Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 6 23:56:55.028389 containerd[1626]: time="2025-11-06T23:56:55.028360405Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:55.030936 containerd[1626]: time="2025-11-06T23:56:55.030906108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:55.031721 containerd[1626]: time="2025-11-06T23:56:55.031689878Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.564796053s" Nov 6 23:56:55.031721 containerd[1626]: time="2025-11-06T23:56:55.031719563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 23:56:55.032283 containerd[1626]: 
time="2025-11-06T23:56:55.032239619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 23:56:56.016842 containerd[1626]: time="2025-11-06T23:56:56.016778749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:56.019139 containerd[1626]: time="2025-11-06T23:56:56.017825682Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 6 23:56:56.019720 containerd[1626]: time="2025-11-06T23:56:56.019676653Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:56.022626 containerd[1626]: time="2025-11-06T23:56:56.022592640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:56.023660 containerd[1626]: time="2025-11-06T23:56:56.023598817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 991.31714ms" Nov 6 23:56:56.023660 containerd[1626]: time="2025-11-06T23:56:56.023642078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 23:56:56.024353 containerd[1626]: time="2025-11-06T23:56:56.024174216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 23:56:57.656694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424087621.mount: Deactivated successfully. Nov 6 23:56:57.658104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:56:57.659524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:56:57.940292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:56:57.945052 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:56:58.215519 containerd[1626]: time="2025-11-06T23:56:58.215373794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:58.216038 kubelet[2146]: E1106 23:56:58.215671 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:56:58.216395 containerd[1626]: time="2025-11-06T23:56:58.216341749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 6 23:56:58.217692 containerd[1626]: time="2025-11-06T23:56:58.217648740Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:58.219330 containerd[1626]: time="2025-11-06T23:56:58.219296950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:56:58.219823 containerd[1626]: time="2025-11-06T23:56:58.219788121Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.195450489s" Nov 6 23:56:58.219861 containerd[1626]: time="2025-11-06T23:56:58.219826153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 23:56:58.220373 containerd[1626]: time="2025-11-06T23:56:58.220346969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 23:56:58.221527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:56:58.221719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:56:58.222100 systemd[1]: kubelet.service: Consumed 526ms CPU time, 110.9M memory peak. Nov 6 23:56:58.722409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518075936.mount: Deactivated successfully. 
Nov 6 23:57:03.393399 containerd[1626]: time="2025-11-06T23:57:03.393338417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:03.394337 containerd[1626]: time="2025-11-06T23:57:03.394037608Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 6 23:57:03.395328 containerd[1626]: time="2025-11-06T23:57:03.395289124Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:03.398197 containerd[1626]: time="2025-11-06T23:57:03.398137735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:03.399184 containerd[1626]: time="2025-11-06T23:57:03.399147879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 5.178769101s" Nov 6 23:57:03.399228 containerd[1626]: time="2025-11-06T23:57:03.399184197Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 23:57:03.399896 containerd[1626]: time="2025-11-06T23:57:03.399861938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 23:57:04.040307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41934051.mount: Deactivated successfully. 
Nov 6 23:57:04.047643 containerd[1626]: time="2025-11-06T23:57:04.047574720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:04.048428 containerd[1626]: time="2025-11-06T23:57:04.048391431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 6 23:57:04.049798 containerd[1626]: time="2025-11-06T23:57:04.049754096Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:04.051724 containerd[1626]: time="2025-11-06T23:57:04.051703541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:04.052347 containerd[1626]: time="2025-11-06T23:57:04.052317502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 652.428995ms" Nov 6 23:57:04.052347 containerd[1626]: time="2025-11-06T23:57:04.052348250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 23:57:04.052863 containerd[1626]: time="2025-11-06T23:57:04.052840052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 23:57:07.976908 containerd[1626]: time="2025-11-06T23:57:07.976842395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:07.978074 containerd[1626]: time="2025-11-06T23:57:07.977625814Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 6 23:57:07.980110 containerd[1626]: time="2025-11-06T23:57:07.979961323Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:07.983550 containerd[1626]: time="2025-11-06T23:57:07.983510558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:07.984443 containerd[1626]: time="2025-11-06T23:57:07.984412149Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.931547351s" Nov 6 23:57:07.984443 containerd[1626]: time="2025-11-06T23:57:07.984443037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 23:57:08.262384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 23:57:08.264105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 23:57:08.473927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:08.489416 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:57:08.538719 kubelet[2285]: E1106 23:57:08.538546 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:57:08.542853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:57:08.543034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:57:08.543445 systemd[1]: kubelet.service: Consumed 230ms CPU time, 108.8M memory peak. Nov 6 23:57:10.388340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:10.388504 systemd[1]: kubelet.service: Consumed 230ms CPU time, 108.8M memory peak. Nov 6 23:57:10.390730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:57:10.418156 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-7.scope)... Nov 6 23:57:10.418172 systemd[1]: Reloading... Nov 6 23:57:10.502229 zram_generator::config[2346]: No configuration found. Nov 6 23:57:11.493321 systemd[1]: Reloading finished in 1074 ms. Nov 6 23:57:11.550863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 23:57:11.550959 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 23:57:11.551251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:11.551290 systemd[1]: kubelet.service: Consumed 153ms CPU time, 98.1M memory peak. Nov 6 23:57:11.552834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:57:11.734802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:11.739220 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:57:11.773747 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:57:11.773747 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:57:11.773747 kubelet[2394]: I1106 23:57:11.773721 2394 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:57:12.380465 kubelet[2394]: I1106 23:57:12.380424 2394 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 23:57:12.380465 kubelet[2394]: I1106 23:57:12.380453 2394 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:57:12.382188 kubelet[2394]: I1106 23:57:12.382153 2394 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 23:57:12.382188 kubelet[2394]: I1106 23:57:12.382183 2394 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:57:12.382506 kubelet[2394]: I1106 23:57:12.382478 2394 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:57:12.809690 kubelet[2394]: I1106 23:57:12.809439 2394 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:57:12.809690 kubelet[2394]: E1106 23:57:12.809449 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:57:12.813777 kubelet[2394]: I1106 23:57:12.813755 2394 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 23:57:12.819090 kubelet[2394]: I1106 23:57:12.819069 2394 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 23:57:12.819864 kubelet[2394]: I1106 23:57:12.819831 2394 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:57:12.820027 kubelet[2394]: I1106 23:57:12.819859 2394 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:57:12.820206 kubelet[2394]: I1106 23:57:12.820044 2394 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:57:12.820206 kubelet[2394]: I1106 23:57:12.820052 2394 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 23:57:12.820206 kubelet[2394]: I1106 23:57:12.820182 2394 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 23:57:12.824300 kubelet[2394]: I1106 23:57:12.824257 2394 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:57:12.824626 kubelet[2394]: I1106 23:57:12.824596 2394 kubelet.go:475] "Attempting to sync node with API server" Nov 6 23:57:12.824626 
kubelet[2394]: I1106 23:57:12.824622 2394 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:57:12.824678 kubelet[2394]: I1106 23:57:12.824660 2394 kubelet.go:387] "Adding apiserver pod source" Nov 6 23:57:12.824701 kubelet[2394]: I1106 23:57:12.824695 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:57:12.825238 kubelet[2394]: E1106 23:57:12.825205 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:57:12.825619 kubelet[2394]: E1106 23:57:12.825560 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:57:12.829385 kubelet[2394]: I1106 23:57:12.829342 2394 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 23:57:12.829917 kubelet[2394]: I1106 23:57:12.829886 2394 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:57:12.829917 kubelet[2394]: I1106 23:57:12.829913 2394 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 23:57:12.829991 kubelet[2394]: W1106 23:57:12.829975 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 6 23:57:12.834538 kubelet[2394]: I1106 23:57:12.834059 2394 server.go:1262] "Started kubelet" Nov 6 23:57:12.835089 kubelet[2394]: I1106 23:57:12.834581 2394 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:57:12.835089 kubelet[2394]: I1106 23:57:12.834624 2394 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 23:57:12.835089 kubelet[2394]: I1106 23:57:12.835000 2394 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:57:12.835188 kubelet[2394]: I1106 23:57:12.835090 2394 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:57:12.835988 kubelet[2394]: I1106 23:57:12.835972 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:57:12.838425 kubelet[2394]: I1106 23:57:12.838405 2394 server.go:310] "Adding debug handlers to kubelet server" Nov 6 23:57:12.839524 kubelet[2394]: I1106 23:57:12.838948 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:57:12.839524 kubelet[2394]: E1106 23:57:12.838348 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875903fca229722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 23:57:12.834021154 +0000 UTC m=+1.091450630,LastTimestamp:2025-11-06 23:57:12.834021154 +0000 UTC m=+1.091450630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 23:57:12.840392 kubelet[2394]: E1106 23:57:12.840372 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:57:12.840442 kubelet[2394]: I1106 23:57:12.840410 2394 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 23:57:12.840619 kubelet[2394]: I1106 23:57:12.840602 2394 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 23:57:12.840675 kubelet[2394]: I1106 23:57:12.840649 2394 reconciler.go:29] "Reconciler: start to sync state" Nov 6 23:57:12.840991 kubelet[2394]: E1106 23:57:12.840962 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:57:12.841267 kubelet[2394]: I1106 23:57:12.841240 2394 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:57:12.841348 kubelet[2394]: I1106 23:57:12.841324 2394 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:57:12.842216 kubelet[2394]: E1106 23:57:12.842200 2394 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:57:12.842717 kubelet[2394]: E1106 23:57:12.842696 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Nov 6 23:57:12.842877 kubelet[2394]: I1106 23:57:12.842857 2394 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:57:12.869157 kubelet[2394]: I1106 23:57:12.869099 2394 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:57:12.869157 kubelet[2394]: I1106 23:57:12.869119 2394 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:57:12.869157 kubelet[2394]: I1106 23:57:12.869152 2394 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:57:12.869668 kubelet[2394]: I1106 23:57:12.869633 2394 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 23:57:12.871063 kubelet[2394]: I1106 23:57:12.871037 2394 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 23:57:12.871105 kubelet[2394]: I1106 23:57:12.871092 2394 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 23:57:12.871281 kubelet[2394]: I1106 23:57:12.871267 2394 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 23:57:12.871821 kubelet[2394]: E1106 23:57:12.871319 2394 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:57:12.871821 kubelet[2394]: E1106 23:57:12.871759 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:57:12.940758 kubelet[2394]: E1106 23:57:12.940703 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:57:12.972087 kubelet[2394]: E1106 23:57:12.972050 2394 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 23:57:13.041368 kubelet[2394]: E1106 23:57:13.041329 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:57:13.044085 kubelet[2394]: E1106 23:57:13.044042 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Nov 6 23:57:13.091179 kubelet[2394]: I1106 23:57:13.091061 2394 policy_none.go:49] "None policy: Start" Nov 6 23:57:13.091179 kubelet[2394]: I1106 23:57:13.091092 2394 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 23:57:13.091179 kubelet[2394]: I1106 23:57:13.091112 2394 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 23:57:13.094794 kubelet[2394]: I1106 23:57:13.094757 2394 policy_none.go:47] "Start" Nov 6 23:57:13.098991 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 6 23:57:13.113535 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:57:13.141611 kubelet[2394]: E1106 23:57:13.141509 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:57:13.156015 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:57:13.158749 kubelet[2394]: E1106 23:57:13.158696 2394 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:57:13.159076 kubelet[2394]: I1106 23:57:13.158986 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:57:13.159076 kubelet[2394]: I1106 23:57:13.159015 2394 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:57:13.159423 kubelet[2394]: I1106 23:57:13.159383 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:57:13.160216 kubelet[2394]: E1106 23:57:13.160194 2394 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:57:13.160355 kubelet[2394]: E1106 23:57:13.160326 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 23:57:13.181889 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 6 23:57:13.200688 kubelet[2394]: E1106 23:57:13.200653 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:13.203469 systemd[1]: Created slice kubepods-burstable-pod00bd06805ff8f7d6616d9bfc19eb467f.slice - libcontainer container kubepods-burstable-pod00bd06805ff8f7d6616d9bfc19eb467f.slice. Nov 6 23:57:13.206261 kubelet[2394]: E1106 23:57:13.206241 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:13.210491 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Nov 6 23:57:13.212042 kubelet[2394]: E1106 23:57:13.212026 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:13.242377 kubelet[2394]: I1106 23:57:13.242344 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:13.242429 kubelet[2394]: I1106 23:57:13.242387 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:57:13.242429 kubelet[2394]: I1106 23:57:13.242419 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:13.242492 kubelet[2394]: I1106 23:57:13.242438 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:13.242492 kubelet[2394]: I1106 23:57:13.242463 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:13.242548 kubelet[2394]: I1106 23:57:13.242527 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:13.242568 kubelet[2394]: I1106 23:57:13.242558 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:13.242599 kubelet[2394]: I1106 23:57:13.242575 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:13.242599 kubelet[2394]: I1106 23:57:13.242589 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:13.260984 kubelet[2394]: I1106 23:57:13.260955 2394 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:13.261346 kubelet[2394]: E1106 23:57:13.261313 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 6 23:57:13.445332 kubelet[2394]: E1106 23:57:13.445238 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Nov 6 23:57:13.463303 kubelet[2394]: I1106 23:57:13.463270 2394 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:13.463593 kubelet[2394]: E1106 23:57:13.463560 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 6 23:57:13.505516 kubelet[2394]: E1106 23:57:13.505451 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:13.506511 containerd[1626]: time="2025-11-06T23:57:13.506453124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:13.510035 kubelet[2394]: E1106 23:57:13.509998 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:13.510594 containerd[1626]: time="2025-11-06T23:57:13.510540067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00bd06805ff8f7d6616d9bfc19eb467f,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:13.515101 kubelet[2394]: E1106 23:57:13.515076 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:13.515563 containerd[1626]: time="2025-11-06T23:57:13.515513071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:13.776707 kubelet[2394]: E1106 23:57:13.776584 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:57:13.786991 kubelet[2394]: E1106 23:57:13.786938 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 
23:57:13.865873 kubelet[2394]: I1106 23:57:13.865827 2394 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:13.866309 kubelet[2394]: E1106 23:57:13.866228 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 6 23:57:13.979392 kubelet[2394]: E1106 23:57:13.979356 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:57:14.054481 kubelet[2394]: E1106 23:57:14.054316 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875903fca229722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 23:57:12.834021154 +0000 UTC m=+1.091450630,LastTimestamp:2025-11-06 23:57:12.834021154 +0000 UTC m=+1.091450630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 23:57:14.246260 kubelet[2394]: E1106 23:57:14.246207 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Nov 6 23:57:14.334647 kubelet[2394]: E1106 23:57:14.334530 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:57:14.667628 kubelet[2394]: I1106 23:57:14.667500 2394 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:14.667975 kubelet[2394]: E1106 23:57:14.667938 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 6 23:57:14.925783 kubelet[2394]: E1106 23:57:14.925649 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:57:15.118644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799009030.mount: Deactivated successfully. 
Nov 6 23:57:15.124014 containerd[1626]: time="2025-11-06T23:57:15.123974015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:57:15.127179 containerd[1626]: time="2025-11-06T23:57:15.127150822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 23:57:15.128177 containerd[1626]: time="2025-11-06T23:57:15.128094712Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:57:15.130080 containerd[1626]: time="2025-11-06T23:57:15.130044147Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:57:15.130840 containerd[1626]: time="2025-11-06T23:57:15.130809271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 23:57:15.131770 containerd[1626]: time="2025-11-06T23:57:15.131727774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:57:15.132600 containerd[1626]: time="2025-11-06T23:57:15.132572237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 23:57:15.133692 containerd[1626]: time="2025-11-06T23:57:15.133667811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:57:15.134300 containerd[1626]: time="2025-11-06T23:57:15.134261324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.624445595s" Nov 6 23:57:15.136826 containerd[1626]: time="2025-11-06T23:57:15.136794193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.618797365s" Nov 6 23:57:15.137596 containerd[1626]: time="2025-11-06T23:57:15.137541414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.624357991s" Nov 6 23:57:15.274730 containerd[1626]: time="2025-11-06T23:57:15.274506403Z" level=info msg="connecting to shim 93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03" address="unix:///run/containerd/s/da52b98c1322ac401a19f40b67ff93fb3a56ee97ff4968aad143393fb099b38b" namespace=k8s.io protocol=ttrpc version=3 Nov 6 
23:57:15.282927 containerd[1626]: time="2025-11-06T23:57:15.282881467Z" level=info msg="connecting to shim e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842" address="unix:///run/containerd/s/2ffbc932c7e8e15afcd5d8c1a465cb53259783e7e8d4e06a0e6821be9809b917" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:15.292192 containerd[1626]: time="2025-11-06T23:57:15.292112124Z" level=info msg="connecting to shim 264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e" address="unix:///run/containerd/s/73df84050aea799ebb01ed22c9e9af5ffd1e6a30cac14cef50f59daf41d037af" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:15.308356 systemd[1]: Started cri-containerd-93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03.scope - libcontainer container 93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03. Nov 6 23:57:15.324249 systemd[1]: Started cri-containerd-264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e.scope - libcontainer container 264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e. Nov 6 23:57:15.325873 systemd[1]: Started cri-containerd-e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842.scope - libcontainer container e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842. Nov 6 23:57:15.382267 containerd[1626]: time="2025-11-06T23:57:15.382230556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03\"" Nov 6 23:57:15.383790 kubelet[2394]: E1106 23:57:15.383753 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:15.391558 containerd[1626]: time="2025-11-06T23:57:15.391508202Z" level=info msg="CreateContainer within sandbox \"93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:57:15.393710 containerd[1626]: time="2025-11-06T23:57:15.393682689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e\"" Nov 6 23:57:15.394308 kubelet[2394]: E1106 23:57:15.394277 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:15.396349 containerd[1626]: time="2025-11-06T23:57:15.396316217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00bd06805ff8f7d6616d9bfc19eb467f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842\"" Nov 6 23:57:15.396943 kubelet[2394]: E1106 23:57:15.396916 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:15.398240 containerd[1626]: time="2025-11-06T23:57:15.398206722Z" level=info msg="CreateContainer within sandbox \"264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:57:15.401851 containerd[1626]: 
time="2025-11-06T23:57:15.401825958Z" level=info msg="CreateContainer within sandbox \"e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:57:15.410006 containerd[1626]: time="2025-11-06T23:57:15.409960019Z" level=info msg="Container bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:15.413573 containerd[1626]: time="2025-11-06T23:57:15.413529291Z" level=info msg="Container 439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:15.416464 containerd[1626]: time="2025-11-06T23:57:15.416429008Z" level=info msg="Container 6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:15.420309 containerd[1626]: time="2025-11-06T23:57:15.420281983Z" level=info msg="CreateContainer within sandbox \"264dc46ee5204af39d23b116d47e01b90dad86e7216d084655947d437975f37e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b\"" Nov 6 23:57:15.420792 containerd[1626]: time="2025-11-06T23:57:15.420770809Z" level=info msg="StartContainer for \"bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b\"" Nov 6 23:57:15.421770 containerd[1626]: time="2025-11-06T23:57:15.421743303Z" level=info msg="connecting to shim bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b" address="unix:///run/containerd/s/73df84050aea799ebb01ed22c9e9af5ffd1e6a30cac14cef50f59daf41d037af" protocol=ttrpc version=3 Nov 6 23:57:15.422391 containerd[1626]: time="2025-11-06T23:57:15.422369647Z" level=info msg="CreateContainer within sandbox \"93e6744bd1b7ea2e6fad490dedcd4e08f999afc4cf48fbe7cc1831d458186e03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea\"" Nov 6 23:57:15.422811 containerd[1626]: time="2025-11-06T23:57:15.422782481Z" level=info msg="StartContainer for \"439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea\"" Nov 6 23:57:15.423812 containerd[1626]: time="2025-11-06T23:57:15.423789790Z" level=info msg="connecting to shim 439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea" address="unix:///run/containerd/s/da52b98c1322ac401a19f40b67ff93fb3a56ee97ff4968aad143393fb099b38b" protocol=ttrpc version=3 Nov 6 23:57:15.425884 containerd[1626]: time="2025-11-06T23:57:15.425849311Z" level=info msg="CreateContainer within sandbox \"e5110d7907cd607006dadc0a683f3a0e00be79d524b99a2df2bbf91315b1f842\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b\"" Nov 6 23:57:15.426281 containerd[1626]: time="2025-11-06T23:57:15.426262105Z" level=info msg="StartContainer for \"6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b\"" Nov 6 23:57:15.427600 containerd[1626]: time="2025-11-06T23:57:15.427532147Z" level=info msg="connecting to shim 6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b" address="unix:///run/containerd/s/2ffbc932c7e8e15afcd5d8c1a465cb53259783e7e8d4e06a0e6821be9809b917" protocol=ttrpc version=3 Nov 6 23:57:15.442259 systemd[1]: Started cri-containerd-bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b.scope - libcontainer container 
bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b. Nov 6 23:57:15.446926 systemd[1]: Started cri-containerd-439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea.scope - libcontainer container 439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea. Nov 6 23:57:15.448144 systemd[1]: Started cri-containerd-6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b.scope - libcontainer container 6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b. Nov 6 23:57:15.501399 containerd[1626]: time="2025-11-06T23:57:15.501355045Z" level=info msg="StartContainer for \"bad97c4363e9d43aecba9c8351c267a7d886f9e98b039965f0d08484a3d0198b\" returns successfully" Nov 6 23:57:15.514493 containerd[1626]: time="2025-11-06T23:57:15.514449076Z" level=info msg="StartContainer for \"6ece68495c739c3168ceca8d0201f831e57ad32946aa4c4590c45c5debc71c8b\" returns successfully" Nov 6 23:57:15.533201 containerd[1626]: time="2025-11-06T23:57:15.532572928Z" level=info msg="StartContainer for \"439cff6af278c33e8c21e226cf92b484dcfd0423cd71db7c9db2624abedd37ea\" returns successfully" Nov 6 23:57:15.880499 kubelet[2394]: E1106 23:57:15.880332 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:15.880499 kubelet[2394]: E1106 23:57:15.880454 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:15.887053 kubelet[2394]: E1106 23:57:15.887023 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:15.887177 kubelet[2394]: E1106 23:57:15.887157 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:15.889215 kubelet[2394]: E1106 23:57:15.889192 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:15.889311 kubelet[2394]: E1106 23:57:15.889291 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:16.271517 kubelet[2394]: I1106 23:57:16.270184 2394 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:16.894442 kubelet[2394]: E1106 23:57:16.894403 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:16.894577 kubelet[2394]: E1106 23:57:16.894565 2394 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:57:16.894725 kubelet[2394]: E1106 23:57:16.894699 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:16.894782 kubelet[2394]: E1106 23:57:16.894760 2394 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:16.905952 kubelet[2394]: E1106 
23:57:16.905914 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 23:57:16.990551 kubelet[2394]: I1106 23:57:16.990501 2394 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:57:17.044034 kubelet[2394]: I1106 23:57:17.043988 2394 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:57:17.083874 kubelet[2394]: E1106 23:57:17.083826 2394 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 23:57:17.083874 kubelet[2394]: I1106 23:57:17.083862 2394 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:17.085522 kubelet[2394]: E1106 23:57:17.085495 2394 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:17.085522 kubelet[2394]: I1106 23:57:17.085510 2394 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:17.086967 kubelet[2394]: E1106 23:57:17.086931 2394 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:17.829202 kubelet[2394]: I1106 23:57:17.829161 2394 apiserver.go:52] "Watching apiserver" Nov 6 23:57:17.841657 kubelet[2394]: I1106 23:57:17.841612 2394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 23:57:19.184747 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... Nov 6 23:57:19.184764 systemd[1]: Reloading... Nov 6 23:57:19.253152 zram_generator::config[2727]: No configuration found. Nov 6 23:57:19.486474 systemd[1]: Reloading finished in 301 ms. Nov 6 23:57:19.507529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:57:19.523807 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:57:19.524213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:19.524282 systemd[1]: kubelet.service: Consumed 1.240s CPU time, 124.9M memory peak. Nov 6 23:57:19.526786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:57:19.773206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:57:19.788559 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:57:19.829148 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:57:19.829148 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 23:57:19.829538 kubelet[2772]: I1106 23:57:19.829166 2772 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:57:19.837434 kubelet[2772]: I1106 23:57:19.837373 2772 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 23:57:19.837434 kubelet[2772]: I1106 23:57:19.837410 2772 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:57:19.837558 kubelet[2772]: I1106 23:57:19.837452 2772 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 23:57:19.837558 kubelet[2772]: I1106 23:57:19.837467 2772 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 23:57:19.837787 kubelet[2772]: I1106 23:57:19.837762 2772 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:57:19.839263 kubelet[2772]: I1106 23:57:19.839238 2772 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 23:57:19.842987 kubelet[2772]: I1106 23:57:19.842890 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:57:19.848840 kubelet[2772]: I1106 23:57:19.848805 2772 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 23:57:19.854565 kubelet[2772]: I1106 23:57:19.854527 2772 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 23:57:19.855017 kubelet[2772]: I1106 23:57:19.854948 2772 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:57:19.855215 kubelet[2772]: I1106 23:57:19.855024 2772 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:57:19.855332 kubelet[2772]: I1106 23:57:19.855226 2772 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 
23:57:19.855332 kubelet[2772]: I1106 23:57:19.855240 2772 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 23:57:19.855332 kubelet[2772]: I1106 23:57:19.855267 2772 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 23:57:19.856050 kubelet[2772]: I1106 23:57:19.856023 2772 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:57:19.856259 kubelet[2772]: I1106 23:57:19.856227 2772 kubelet.go:475] "Attempting to sync node with API server" Nov 6 23:57:19.856259 kubelet[2772]: I1106 23:57:19.856254 2772 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:57:19.856401 kubelet[2772]: I1106 23:57:19.856281 2772 kubelet.go:387] "Adding apiserver pod source" Nov 6 23:57:19.856401 kubelet[2772]: I1106 23:57:19.856305 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:57:19.857852 kubelet[2772]: I1106 23:57:19.857804 2772 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 23:57:19.858566 kubelet[2772]: I1106 23:57:19.858544 2772 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:57:19.858645 kubelet[2772]: I1106 23:57:19.858584 2772 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.863600 2772 server.go:1262] "Started kubelet" Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.865312 2772 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.865410 2772 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.865497 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.865661 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:57:19.865929 kubelet[2772]: I1106 23:57:19.865741 2772 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:57:19.866808 kubelet[2772]: I1106 23:57:19.866791 2772 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 23:57:19.867007 kubelet[2772]: I1106 23:57:19.866993 2772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 23:57:19.867293 kubelet[2772]: I1106 23:57:19.867260 2772 reconciler.go:29] "Reconciler: start to sync state" Nov 6 23:57:19.871284 kubelet[2772]: I1106 23:57:19.871212 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:57:19.871367 kubelet[2772]: I1106 23:57:19.871342 2772 server.go:310] "Adding debug handlers to kubelet server" Nov 6 23:57:19.872485 kubelet[2772]: I1106 23:57:19.872455 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:57:19.874911 kubelet[2772]: I1106 23:57:19.874870 2772 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:57:19.874911 kubelet[2772]: I1106 23:57:19.874890 2772 factory.go:223] 
Registration of the systemd container factory successfully Nov 6 23:57:19.880979 kubelet[2772]: I1106 23:57:19.880930 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 23:57:19.882300 kubelet[2772]: I1106 23:57:19.882268 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 23:57:19.882300 kubelet[2772]: I1106 23:57:19.882291 2772 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 23:57:19.882374 kubelet[2772]: I1106 23:57:19.882321 2772 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 23:57:19.882374 kubelet[2772]: E1106 23:57:19.882362 2772 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:57:19.913988 kubelet[2772]: I1106 23:57:19.913951 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:57:19.913988 kubelet[2772]: I1106 23:57:19.913969 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:57:19.913988 kubelet[2772]: I1106 23:57:19.913987 2772 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914323 2772 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914336 2772 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914354 2772 policy_none.go:49] "None policy: Start" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914362 2772 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914373 2772 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914452 2772 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 23:57:19.914953 kubelet[2772]: I1106 23:57:19.914459 2772 policy_none.go:47] "Start" Nov 6 23:57:19.918326 kubelet[2772]: E1106 23:57:19.918300 2772 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:57:19.918514 kubelet[2772]: I1106 23:57:19.918500 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:57:19.918554 kubelet[2772]: I1106 23:57:19.918513 2772 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:57:19.919009 kubelet[2772]: I1106 23:57:19.918990 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:57:19.919638 kubelet[2772]: E1106 23:57:19.919620 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:57:19.983068 kubelet[2772]: I1106 23:57:19.983014 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:19.983216 kubelet[2772]: I1106 23:57:19.983110 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:19.983271 kubelet[2772]: I1106 23:57:19.983220 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:57:20.025338 kubelet[2772]: I1106 23:57:20.025233 2772 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:57:20.034229 kubelet[2772]: I1106 23:57:20.034195 2772 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 23:57:20.034364 kubelet[2772]: I1106 23:57:20.034279 2772 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:57:20.068722 kubelet[2772]: I1106 23:57:20.068667 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.068722 kubelet[2772]: I1106 23:57:20.068720 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.068838 kubelet[2772]: I1106 23:57:20.068744 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.068838 kubelet[2772]: I1106 23:57:20.068764 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:57:20.068838 kubelet[2772]: I1106 23:57:20.068780 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:20.068838 kubelet[2772]: I1106 23:57:20.068792 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:20.068838 kubelet[2772]: I1106 23:57:20.068806 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.068956 kubelet[2772]: I1106 23:57:20.068821 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.068956 kubelet[2772]: I1106 23:57:20.068835 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00bd06805ff8f7d6616d9bfc19eb467f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00bd06805ff8f7d6616d9bfc19eb467f\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:57:20.189617 sudo[2811]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:57:20.189967 sudo[2811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:57:20.290096 kubelet[2772]: E1106 23:57:20.289601 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.290096 kubelet[2772]: E1106 23:57:20.289790 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.290096 kubelet[2772]: E1106 23:57:20.289851 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.503442 sudo[2811]: pam_unix(sudo:session): session closed for user root Nov 6 23:57:20.857363 kubelet[2772]: I1106 23:57:20.857313 2772 apiserver.go:52] "Watching apiserver" Nov 6 23:57:20.906226 kubelet[2772]: I1106 23:57:20.906168 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.906548 kubelet[2772]: E1106 23:57:20.906273 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.906548 kubelet[2772]: E1106 23:57:20.906351 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.912408 kubelet[2772]: E1106 23:57:20.912381 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:57:20.912684 kubelet[2772]: E1106 23:57:20.912671 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:20.955644 kubelet[2772]: I1106 23:57:20.955572 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9555253320000001 podStartE2EDuration="1.955525332s" podCreationTimestamp="2025-11-06 
23:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:20.955514151 +0000 UTC m=+1.161370494" watchObservedRunningTime="2025-11-06 23:57:20.955525332 +0000 UTC m=+1.161381676" Nov 6 23:57:20.967364 kubelet[2772]: I1106 23:57:20.967327 2772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 23:57:20.995135 kubelet[2772]: I1106 23:57:20.995058 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.995043587 podStartE2EDuration="1.995043587s" podCreationTimestamp="2025-11-06 23:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:20.994898509 +0000 UTC m=+1.200754852" watchObservedRunningTime="2025-11-06 23:57:20.995043587 +0000 UTC m=+1.200899920" Nov 6 23:57:20.995593 kubelet[2772]: I1106 23:57:20.995561 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.995550189 podStartE2EDuration="1.995550189s" podCreationTimestamp="2025-11-06 23:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:20.986610362 +0000 UTC m=+1.192466705" watchObservedRunningTime="2025-11-06 23:57:20.995550189 +0000 UTC m=+1.201406532" Nov 6 23:57:21.831419 sudo[1825]: pam_unix(sudo:session): session closed for user root Nov 6 23:57:21.833259 sshd[1824]: Connection closed by 10.0.0.1 port 53592 Nov 6 23:57:21.834040 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Nov 6 23:57:21.838340 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:53592.service: Deactivated successfully. Nov 6 23:57:21.840380 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:57:21.840584 systemd[1]: session-7.scope: Consumed 4.791s CPU time, 258.4M memory peak. Nov 6 23:57:21.841913 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:57:21.842955 systemd-logind[1598]: Removed session 7. 
Nov 6 23:57:21.905831 kubelet[2772]: E1106 23:57:21.905773 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:21.905831 kubelet[2772]: E1106 23:57:21.905810 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:21.906361 kubelet[2772]: E1106 23:57:21.905936 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:23.671801 kubelet[2772]: E1106 23:57:23.671751 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:25.974674 kubelet[2772]: I1106 23:57:25.974633 2772 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:57:25.975180 containerd[1626]: time="2025-11-06T23:57:25.975070438Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:57:25.975468 kubelet[2772]: I1106 23:57:25.975322 2772 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:57:26.898518 kubelet[2772]: E1106 23:57:26.898387 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:26.913036 kubelet[2772]: E1106 23:57:26.912964 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:26.966004 systemd[1]: Created slice kubepods-besteffort-podb7b80340_c715_4bd3_9ed9_509c19283e66.slice - libcontainer container kubepods-besteffort-podb7b80340_c715_4bd3_9ed9_509c19283e66.slice. Nov 6 23:57:26.983917 systemd[1]: Created slice kubepods-burstable-pod87bdf9df_e0e0_46c1_90b9_d40af36c1376.slice - libcontainer container kubepods-burstable-pod87bdf9df_e0e0_46c1_90b9_d40af36c1376.slice. 
Nov 6 23:57:27.013413 kubelet[2772]: I1106 23:57:27.013367 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jkz\" (UniqueName: \"kubernetes.io/projected/b7b80340-c715-4bd3-9ed9-509c19283e66-kube-api-access-92jkz\") pod \"kube-proxy-zcnmt\" (UID: \"b7b80340-c715-4bd3-9ed9-509c19283e66\") " pod="kube-system/kube-proxy-zcnmt" Nov 6 23:57:27.013413 kubelet[2772]: I1106 23:57:27.013398 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-bpf-maps\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013413 kubelet[2772]: I1106 23:57:27.013417 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-config-path\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013431 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hubble-tls\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013444 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65dc\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-kube-api-access-j65dc\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013460 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b80340-c715-4bd3-9ed9-509c19283e66-lib-modules\") pod \"kube-proxy-zcnmt\" (UID: \"b7b80340-c715-4bd3-9ed9-509c19283e66\") " pod="kube-system/kube-proxy-zcnmt" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013472 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-run\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013494 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-cgroup\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.013894 kubelet[2772]: I1106 23:57:27.013515 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-kernel\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013534 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-lib-modules\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013546 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-xtables-lock\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013559 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7b80340-c715-4bd3-9ed9-509c19283e66-kube-proxy\") pod \"kube-proxy-zcnmt\" (UID: \"b7b80340-c715-4bd3-9ed9-509c19283e66\") " pod="kube-system/kube-proxy-zcnmt" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013572 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7b80340-c715-4bd3-9ed9-509c19283e66-xtables-lock\") pod \"kube-proxy-zcnmt\" (UID: \"b7b80340-c715-4bd3-9ed9-509c19283e66\") " pod="kube-system/kube-proxy-zcnmt" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013584 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hostproc\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014038 kubelet[2772]: I1106 23:57:27.013596 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cni-path\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014213 kubelet[2772]: I1106 23:57:27.013608 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-etc-cni-netd\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014213 kubelet[2772]: I1106 23:57:27.013621 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87bdf9df-e0e0-46c1-90b9-d40af36c1376-clustermesh-secrets\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.014213 kubelet[2772]: I1106 23:57:27.013634 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-net\") pod \"cilium-66695\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " pod="kube-system/cilium-66695" Nov 6 23:57:27.056417 systemd[1]: Created slice kubepods-besteffort-podae6ccc5f_38de_460a_a9fd_ba4749d438b4.slice - libcontainer container kubepods-besteffort-podae6ccc5f_38de_460a_a9fd_ba4749d438b4.slice. 
Nov 6 23:57:27.114515 kubelet[2772]: I1106 23:57:27.114471 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-lmjsk\" (UID: \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\") " pod="kube-system/cilium-operator-6f9c7c5859-lmjsk" Nov 6 23:57:27.114661 kubelet[2772]: I1106 23:57:27.114551 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr6n6\" (UniqueName: \"kubernetes.io/projected/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-kube-api-access-lr6n6\") pod \"cilium-operator-6f9c7c5859-lmjsk\" (UID: \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\") " pod="kube-system/cilium-operator-6f9c7c5859-lmjsk" Nov 6 23:57:27.283701 kubelet[2772]: E1106 23:57:27.283640 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.284379 containerd[1626]: time="2025-11-06T23:57:27.284338692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcnmt,Uid:b7b80340-c715-4bd3-9ed9-509c19283e66,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:27.291475 kubelet[2772]: E1106 23:57:27.291437 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.292312 containerd[1626]: time="2025-11-06T23:57:27.292282734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66695,Uid:87bdf9df-e0e0-46c1-90b9-d40af36c1376,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:27.329302 containerd[1626]: time="2025-11-06T23:57:27.329261932Z" level=info msg="connecting to shim 46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b" address="unix:///run/containerd/s/f500c867ee0a57c24a408e83e7c23efb02c329480ffb9670ddfe417634a6b5e7" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:27.330611 containerd[1626]: time="2025-11-06T23:57:27.330566013Z" level=info msg="connecting to shim bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:27.363265 kubelet[2772]: E1106 23:57:27.363225 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.363700 containerd[1626]: time="2025-11-06T23:57:27.363669046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lmjsk,Uid:ae6ccc5f-38de-460a-a9fd-ba4749d438b4,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:27.388676 containerd[1626]: time="2025-11-06T23:57:27.388629278Z" level=info msg="connecting to shim b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355" address="unix:///run/containerd/s/64fbea0cd8df739816a52e9937bfc7c6d9984e56d3664bc0569d68f3811eed53" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:27.399324 systemd[1]: Started cri-containerd-46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b.scope - libcontainer container 46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b. 
Nov 6 23:57:27.407652 systemd[1]: Started cri-containerd-bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4.scope - libcontainer container bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4. Nov 6 23:57:27.415290 systemd[1]: Started cri-containerd-b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355.scope - libcontainer container b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355. Nov 6 23:57:27.440680 containerd[1626]: time="2025-11-06T23:57:27.440619462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66695,Uid:87bdf9df-e0e0-46c1-90b9-d40af36c1376,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\"" Nov 6 23:57:27.444582 kubelet[2772]: E1106 23:57:27.444551 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.446652 containerd[1626]: time="2025-11-06T23:57:27.446479179Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:57:27.465411 containerd[1626]: time="2025-11-06T23:57:27.465359998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcnmt,Uid:b7b80340-c715-4bd3-9ed9-509c19283e66,Namespace:kube-system,Attempt:0,} returns sandbox id \"46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b\"" Nov 6 23:57:27.466052 kubelet[2772]: E1106 23:57:27.466021 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.471590 containerd[1626]: time="2025-11-06T23:57:27.471559832Z" level=info msg="CreateContainer within sandbox \"46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:57:27.473376 containerd[1626]: time="2025-11-06T23:57:27.473341251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lmjsk,Uid:ae6ccc5f-38de-460a-a9fd-ba4749d438b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\"" Nov 6 23:57:27.474179 kubelet[2772]: E1106 23:57:27.474066 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:27.482556 containerd[1626]: time="2025-11-06T23:57:27.482502058Z" level=info msg="Container 14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:27.491178 containerd[1626]: time="2025-11-06T23:57:27.491114731Z" level=info msg="CreateContainer within sandbox \"46b98e8c1945c1be8845b3901b76bc515de53997440403354db32157015b840b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412\"" Nov 6 23:57:27.492346 containerd[1626]: time="2025-11-06T23:57:27.491767072Z" level=info msg="StartContainer for \"14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412\"" Nov 6 23:57:27.493447 containerd[1626]: time="2025-11-06T23:57:27.493402013Z" level=info msg="connecting to shim 14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412" 
address="unix:///run/containerd/s/f500c867ee0a57c24a408e83e7c23efb02c329480ffb9670ddfe417634a6b5e7" protocol=ttrpc version=3 Nov 6 23:57:27.518269 systemd[1]: Started cri-containerd-14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412.scope - libcontainer container 14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412. Nov 6 23:57:27.568910 containerd[1626]: time="2025-11-06T23:57:27.568809684Z" level=info msg="StartContainer for \"14624c25b1c446bdeb52c254a0b8a428077418e579c1504f26157ded041b4412\" returns successfully" Nov 6 23:57:27.917568 kubelet[2772]: E1106 23:57:27.917458 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:30.075221 kubelet[2772]: E1106 23:57:30.075172 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:30.086220 kubelet[2772]: I1106 23:57:30.086146 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zcnmt" podStartSLOduration=4.086103592 podStartE2EDuration="4.086103592s" podCreationTimestamp="2025-11-06 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:27.927474159 +0000 UTC m=+8.133330502" watchObservedRunningTime="2025-11-06 23:57:30.086103592 +0000 UTC m=+10.291959935" Nov 6 23:57:30.384422 update_engine[1602]: I20251106 23:57:30.384264 1602 update_attempter.cc:509] Updating boot flags... Nov 6 23:57:30.922900 kubelet[2772]: E1106 23:57:30.922855 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:31.924789 kubelet[2772]: E1106 23:57:31.924754 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:33.679237 kubelet[2772]: E1106 23:57:33.678737 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:33.928541 kubelet[2772]: E1106 23:57:33.928505 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:38.248910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379174081.mount: Deactivated successfully. 
Nov 6 23:57:43.290730 kubelet[2772]: E1106 23:57:43.290686 2772 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.408s" Nov 6 23:57:43.615499 containerd[1626]: time="2025-11-06T23:57:43.615370819Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:43.616203 containerd[1626]: time="2025-11-06T23:57:43.616151120Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 23:57:43.617245 containerd[1626]: time="2025-11-06T23:57:43.617198554Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:43.618693 containerd[1626]: time="2025-11-06T23:57:43.618660639Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.172150542s" Nov 6 23:57:43.618745 containerd[1626]: time="2025-11-06T23:57:43.618694744Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 23:57:43.619629 containerd[1626]: time="2025-11-06T23:57:43.619606392Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:57:43.623590 containerd[1626]: time="2025-11-06T23:57:43.623552028Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:57:43.632589 containerd[1626]: time="2025-11-06T23:57:43.632532124Z" level=info msg="Container f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:43.636515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261638092.mount: Deactivated successfully. 
Nov 6 23:57:43.640304 containerd[1626]: time="2025-11-06T23:57:43.640253226Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\"" Nov 6 23:57:43.640817 containerd[1626]: time="2025-11-06T23:57:43.640789266Z" level=info msg="StartContainer for \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\"" Nov 6 23:57:43.641770 containerd[1626]: time="2025-11-06T23:57:43.641731863Z" level=info msg="connecting to shim f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" protocol=ttrpc version=3 Nov 6 23:57:43.667283 systemd[1]: Started cri-containerd-f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3.scope - libcontainer container f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3. Nov 6 23:57:43.701757 containerd[1626]: time="2025-11-06T23:57:43.701670712Z" level=info msg="StartContainer for \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" returns successfully" Nov 6 23:57:43.713885 systemd[1]: cri-containerd-f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3.scope: Deactivated successfully. Nov 6 23:57:43.716876 containerd[1626]: time="2025-11-06T23:57:43.716744732Z" level=info msg="received exit event container_id:\"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" id:\"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" pid:3215 exited_at:{seconds:1762473463 nanos:716365067}" Nov 6 23:57:43.716876 containerd[1626]: time="2025-11-06T23:57:43.716762036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" id:\"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" pid:3215 exited_at:{seconds:1762473463 nanos:716365067}" Nov 6 23:57:43.734987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3-rootfs.mount: Deactivated successfully. Nov 6 23:57:44.291858 kubelet[2772]: E1106 23:57:44.291823 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:45.295221 kubelet[2772]: E1106 23:57:45.295191 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:45.462725 containerd[1626]: time="2025-11-06T23:57:45.462682945Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:57:45.982061 containerd[1626]: time="2025-11-06T23:57:45.982006942Z" level=info msg="Container 987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:46.139834 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:53664.service - OpenSSH per-connection server daemon (10.0.0.1:53664). 
Nov 6 23:57:46.198205 sshd[3250]: Accepted publickey for core from 10.0.0.1 port 53664 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:57:46.199658 sshd-session[3250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:57:46.203951 systemd-logind[1598]: New session 8 of user core. Nov 6 23:57:46.210244 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:57:46.545817 sshd[3253]: Connection closed by 10.0.0.1 port 53664 Nov 6 23:57:46.546055 sshd-session[3250]: pam_unix(sshd:session): session closed for user core Nov 6 23:57:46.550860 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:53664.service: Deactivated successfully. Nov 6 23:57:46.552793 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:57:46.554498 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:57:46.555667 systemd-logind[1598]: Removed session 8. Nov 6 23:57:46.666533 containerd[1626]: time="2025-11-06T23:57:46.666477367Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\"" Nov 6 23:57:46.667110 containerd[1626]: time="2025-11-06T23:57:46.667061567Z" level=info msg="StartContainer for \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\"" Nov 6 23:57:46.667910 containerd[1626]: time="2025-11-06T23:57:46.667870280Z" level=info msg="connecting to shim 987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" protocol=ttrpc version=3 Nov 6 23:57:46.689265 systemd[1]: Started cri-containerd-987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018.scope - libcontainer container 987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018. Nov 6 23:57:46.798337 containerd[1626]: time="2025-11-06T23:57:46.798159888Z" level=info msg="StartContainer for \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" returns successfully" Nov 6 23:57:46.814200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:57:46.814488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:57:46.814557 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:57:46.817468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:57:46.823739 systemd[1]: cri-containerd-987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018.scope: Deactivated successfully. 
Nov 6 23:57:46.824958 containerd[1626]: time="2025-11-06T23:57:46.824845848Z" level=info msg="received exit event container_id:\"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" id:\"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" pid:3279 exited_at:{seconds:1762473466 nanos:824260597}" Nov 6 23:57:46.825359 containerd[1626]: time="2025-11-06T23:57:46.825188373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" id:\"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" pid:3279 exited_at:{seconds:1762473466 nanos:824260597}" Nov 6 23:57:46.847890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018-rootfs.mount: Deactivated successfully. Nov 6 23:57:46.851680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:57:47.301424 kubelet[2772]: E1106 23:57:47.301390 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:47.325054 containerd[1626]: time="2025-11-06T23:57:47.324888728Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:57:47.338672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161920315.mount: Deactivated successfully. Nov 6 23:57:47.344950 containerd[1626]: time="2025-11-06T23:57:47.344824488Z" level=info msg="Container 4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:47.365063 containerd[1626]: time="2025-11-06T23:57:47.364821172Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\"" Nov 6 23:57:47.365365 containerd[1626]: time="2025-11-06T23:57:47.365345399Z" level=info msg="StartContainer for \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\"" Nov 6 23:57:47.368536 containerd[1626]: time="2025-11-06T23:57:47.368494137Z" level=info msg="connecting to shim 4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" protocol=ttrpc version=3 Nov 6 23:57:47.393314 systemd[1]: Started cri-containerd-4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a.scope - libcontainer container 4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a. Nov 6 23:57:47.444005 systemd[1]: cri-containerd-4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a.scope: Deactivated successfully. 
Nov 6 23:57:47.445709 containerd[1626]: time="2025-11-06T23:57:47.445675062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" id:\"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" pid:3339 exited_at:{seconds:1762473467 nanos:445384724}" Nov 6 23:57:47.445833 containerd[1626]: time="2025-11-06T23:57:47.445746906Z" level=info msg="received exit event container_id:\"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" id:\"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" pid:3339 exited_at:{seconds:1762473467 nanos:445384724}" Nov 6 23:57:47.445938 containerd[1626]: time="2025-11-06T23:57:47.445907008Z" level=info msg="StartContainer for \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" returns successfully" Nov 6 23:57:47.685640 containerd[1626]: time="2025-11-06T23:57:47.685591490Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:47.686377 containerd[1626]: time="2025-11-06T23:57:47.686358453Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 23:57:47.687749 containerd[1626]: time="2025-11-06T23:57:47.687729494Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:57:47.688849 containerd[1626]: time="2025-11-06T23:57:47.688811611Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.069108596s" Nov 6 23:57:47.688849 containerd[1626]: time="2025-11-06T23:57:47.688836127Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 23:57:47.692564 containerd[1626]: time="2025-11-06T23:57:47.692529260Z" level=info msg="CreateContainer within sandbox \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:57:47.699733 containerd[1626]: time="2025-11-06T23:57:47.699691272Z" level=info msg="Container b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:47.705824 containerd[1626]: time="2025-11-06T23:57:47.705782129Z" level=info msg="CreateContainer within sandbox \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\"" Nov 6 23:57:47.706323 containerd[1626]: time="2025-11-06T23:57:47.706247184Z" level=info msg="StartContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\"" Nov 6 23:57:47.707145 containerd[1626]: time="2025-11-06T23:57:47.707084871Z" 
level=info msg="connecting to shim b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb" address="unix:///run/containerd/s/64fbea0cd8df739816a52e9937bfc7c6d9984e56d3664bc0569d68f3811eed53" protocol=ttrpc version=3 Nov 6 23:57:47.732288 systemd[1]: Started cri-containerd-b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb.scope - libcontainer container b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb. Nov 6 23:57:47.761877 containerd[1626]: time="2025-11-06T23:57:47.761831653Z" level=info msg="StartContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" returns successfully" Nov 6 23:57:48.306276 kubelet[2772]: E1106 23:57:48.306104 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:48.443681 kubelet[2772]: E1106 23:57:48.443645 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:48.443838 containerd[1626]: time="2025-11-06T23:57:48.443772714Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:57:48.745635 containerd[1626]: time="2025-11-06T23:57:48.745033493Z" level=info msg="Container 5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:48.909387 containerd[1626]: time="2025-11-06T23:57:48.909335748Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\"" Nov 6 23:57:48.909967 containerd[1626]: time="2025-11-06T23:57:48.909923624Z" level=info msg="StartContainer for \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\"" Nov 6 23:57:48.910814 containerd[1626]: time="2025-11-06T23:57:48.910774576Z" level=info msg="connecting to shim 5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" protocol=ttrpc version=3 Nov 6 23:57:48.934260 systemd[1]: Started cri-containerd-5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7.scope - libcontainer container 5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7. Nov 6 23:57:48.968963 containerd[1626]: time="2025-11-06T23:57:48.968922453Z" level=info msg="StartContainer for \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" returns successfully" Nov 6 23:57:48.968968 systemd[1]: cri-containerd-5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7.scope: Deactivated successfully. 
Nov 6 23:57:48.971954 containerd[1626]: time="2025-11-06T23:57:48.971880851Z" level=info msg="received exit event container_id:\"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" id:\"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" pid:3417 exited_at:{seconds:1762473468 nanos:971663422}" Nov 6 23:57:48.974201 containerd[1626]: time="2025-11-06T23:57:48.972286214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" id:\"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" pid:3417 exited_at:{seconds:1762473468 nanos:971663422}" Nov 6 23:57:48.995733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7-rootfs.mount: Deactivated successfully. Nov 6 23:57:49.314860 kubelet[2772]: E1106 23:57:49.314739 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:49.315360 kubelet[2772]: E1106 23:57:49.315281 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:49.319909 containerd[1626]: time="2025-11-06T23:57:49.319868382Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:57:49.332149 kubelet[2772]: I1106 23:57:49.332041 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-lmjsk" podStartSLOduration=2.117527888 podStartE2EDuration="22.332018934s" podCreationTimestamp="2025-11-06 23:57:27 +0000 UTC" firstStartedPulling="2025-11-06 23:57:27.47489721 +0000 UTC m=+7.680753553" lastFinishedPulling="2025-11-06 23:57:47.689388266 +0000 UTC m=+27.895244599" observedRunningTime="2025-11-06 23:57:48.970560166 +0000 UTC m=+29.176416509" watchObservedRunningTime="2025-11-06 23:57:49.332018934 +0000 UTC m=+29.537875277" Nov 6 23:57:49.334495 containerd[1626]: time="2025-11-06T23:57:49.334436513Z" level=info msg="Container 40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:49.341405 containerd[1626]: time="2025-11-06T23:57:49.341370279Z" level=info msg="CreateContainer within sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\"" Nov 6 23:57:49.341962 containerd[1626]: time="2025-11-06T23:57:49.341929802Z" level=info msg="StartContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\"" Nov 6 23:57:49.342741 containerd[1626]: time="2025-11-06T23:57:49.342694231Z" level=info msg="connecting to shim 40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c" address="unix:///run/containerd/s/22b4b4b715e9f567450e10195d7ce3aae885ae8dfaa4c77715e98458f2b0918f" protocol=ttrpc version=3 Nov 6 23:57:49.367348 systemd[1]: Started cri-containerd-40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c.scope - libcontainer container 40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c. 
Nov 6 23:57:49.405709 containerd[1626]: time="2025-11-06T23:57:49.405165107Z" level=info msg="StartContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" returns successfully" Nov 6 23:57:49.490805 containerd[1626]: time="2025-11-06T23:57:49.490762338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" id:\"52d9084274968ee7fe2adf03b1ba5a64f14cdee3dcc2aa6c94d080ea5d61289a\" pid:3486 exited_at:{seconds:1762473469 nanos:490400636}" Nov 6 23:57:49.573520 kubelet[2772]: I1106 23:57:49.572715 2772 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 23:57:49.643994 systemd[1]: Created slice kubepods-burstable-pod9b3ab03c_4131_4f88_95f4_53bfba6de5d1.slice - libcontainer container kubepods-burstable-pod9b3ab03c_4131_4f88_95f4_53bfba6de5d1.slice. Nov 6 23:57:49.653681 systemd[1]: Created slice kubepods-burstable-pod69c1f7b0_a284_408e_a227_28268b23a6b7.slice - libcontainer container kubepods-burstable-pod69c1f7b0_a284_408e_a227_28268b23a6b7.slice. Nov 6 23:57:49.728155 kubelet[2772]: I1106 23:57:49.728101 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69c1f7b0-a284-408e-a227-28268b23a6b7-config-volume\") pod \"coredns-66bc5c9577-46wwn\" (UID: \"69c1f7b0-a284-408e-a227-28268b23a6b7\") " pod="kube-system/coredns-66bc5c9577-46wwn" Nov 6 23:57:49.728155 kubelet[2772]: I1106 23:57:49.728150 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b3ab03c-4131-4f88-95f4-53bfba6de5d1-config-volume\") pod \"coredns-66bc5c9577-cbfp8\" (UID: \"9b3ab03c-4131-4f88-95f4-53bfba6de5d1\") " pod="kube-system/coredns-66bc5c9577-cbfp8" Nov 6 23:57:49.728335 kubelet[2772]: I1106 23:57:49.728168 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldrp\" (UniqueName: \"kubernetes.io/projected/69c1f7b0-a284-408e-a227-28268b23a6b7-kube-api-access-tldrp\") pod \"coredns-66bc5c9577-46wwn\" (UID: \"69c1f7b0-a284-408e-a227-28268b23a6b7\") " pod="kube-system/coredns-66bc5c9577-46wwn" Nov 6 23:57:49.728335 kubelet[2772]: I1106 23:57:49.728185 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7p6p\" (UniqueName: \"kubernetes.io/projected/9b3ab03c-4131-4f88-95f4-53bfba6de5d1-kube-api-access-n7p6p\") pod \"coredns-66bc5c9577-cbfp8\" (UID: \"9b3ab03c-4131-4f88-95f4-53bfba6de5d1\") " pod="kube-system/coredns-66bc5c9577-cbfp8" Nov 6 23:57:49.950976 kubelet[2772]: E1106 23:57:49.950937 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:49.951593 containerd[1626]: time="2025-11-06T23:57:49.951548868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cbfp8,Uid:9b3ab03c-4131-4f88-95f4-53bfba6de5d1,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:49.961965 kubelet[2772]: E1106 23:57:49.961939 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:49.963144 containerd[1626]: time="2025-11-06T23:57:49.962305278Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-46wwn,Uid:69c1f7b0-a284-408e-a227-28268b23a6b7,Namespace:kube-system,Attempt:0,}" Nov 6 23:57:50.329197 kubelet[2772]: E1106 23:57:50.329056 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:50.344401 kubelet[2772]: I1106 23:57:50.344330 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-66695" podStartSLOduration=8.170881252 podStartE2EDuration="24.344310841s" podCreationTimestamp="2025-11-06 23:57:26 +0000 UTC" firstStartedPulling="2025-11-06 23:57:27.446047018 +0000 UTC m=+7.651903351" lastFinishedPulling="2025-11-06 23:57:43.619476597 +0000 UTC m=+23.825332940" observedRunningTime="2025-11-06 23:57:50.343265345 +0000 UTC m=+30.549121708" watchObservedRunningTime="2025-11-06 23:57:50.344310841 +0000 UTC m=+30.550167185" Nov 6 23:57:51.330386 kubelet[2772]: E1106 23:57:51.330345 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:51.561893 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:53670.service - OpenSSH per-connection server daemon (10.0.0.1:53670). Nov 6 23:57:51.620082 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 53670 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:57:51.621400 sshd-session[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:57:51.625812 systemd-logind[1598]: New session 9 of user core. Nov 6 23:57:51.632249 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:57:51.841067 sshd[3583]: Connection closed by 10.0.0.1 port 53670 Nov 6 23:57:51.841396 sshd-session[3580]: pam_unix(sshd:session): session closed for user core Nov 6 23:57:51.846772 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:53670.service: Deactivated successfully. Nov 6 23:57:51.848826 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:57:51.849684 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:57:51.850869 systemd-logind[1598]: Removed session 9. 
Nov 6 23:57:52.148329 systemd-networkd[1522]: cilium_host: Link UP Nov 6 23:57:52.148567 systemd-networkd[1522]: cilium_net: Link UP Nov 6 23:57:52.148779 systemd-networkd[1522]: cilium_net: Gained carrier Nov 6 23:57:52.148974 systemd-networkd[1522]: cilium_host: Gained carrier Nov 6 23:57:52.254078 systemd-networkd[1522]: cilium_vxlan: Link UP Nov 6 23:57:52.254383 systemd-networkd[1522]: cilium_vxlan: Gained carrier Nov 6 23:57:52.331990 kubelet[2772]: E1106 23:57:52.331953 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:52.454320 systemd-networkd[1522]: cilium_host: Gained IPv6LL Nov 6 23:57:52.479165 kernel: NET: Registered PF_ALG protocol family Nov 6 23:57:52.830382 systemd-networkd[1522]: cilium_net: Gained IPv6LL Nov 6 23:57:53.160282 systemd-networkd[1522]: lxc_health: Link UP Nov 6 23:57:53.161389 systemd-networkd[1522]: lxc_health: Gained carrier Nov 6 23:57:53.334405 kubelet[2772]: E1106 23:57:53.334334 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:53.511172 kernel: eth0: renamed from tmp45ae4 Nov 6 23:57:53.510025 systemd-networkd[1522]: lxc08f931e7df33: Link UP Nov 6 23:57:53.514045 systemd-networkd[1522]: lxc08f931e7df33: Gained carrier Nov 6 23:57:53.522475 systemd-networkd[1522]: lxc99bb3b6c2a79: Link UP Nov 6 23:57:53.534179 kernel: eth0: renamed from tmpd2643 Nov 6 23:57:53.534450 systemd-networkd[1522]: lxc99bb3b6c2a79: Gained carrier Nov 6 23:57:54.238425 systemd-networkd[1522]: cilium_vxlan: Gained IPv6LL Nov 6 23:57:54.336468 kubelet[2772]: E1106 23:57:54.336432 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:54.494328 systemd-networkd[1522]: lxc_health: Gained IPv6LL Nov 6 23:57:54.558294 systemd-networkd[1522]: lxc99bb3b6c2a79: Gained IPv6LL Nov 6 23:57:54.750328 systemd-networkd[1522]: lxc08f931e7df33: Gained IPv6LL Nov 6 23:57:55.338337 kubelet[2772]: E1106 23:57:55.338289 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:56.861195 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:45826.service - OpenSSH per-connection server daemon (10.0.0.1:45826). Nov 6 23:57:56.926915 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 45826 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:57:56.928578 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:57:56.932806 systemd-logind[1598]: New session 10 of user core. Nov 6 23:57:56.947258 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 6 23:57:57.030154 containerd[1626]: time="2025-11-06T23:57:57.030087567Z" level=info msg="connecting to shim 45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a" address="unix:///run/containerd/s/4a87f3be30ebdf4091ddf47e258d7240b39791ce3dc6973a3e6d4313e955dfe2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:57.039820 containerd[1626]: time="2025-11-06T23:57:57.039699549Z" level=info msg="connecting to shim d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e" address="unix:///run/containerd/s/9d9f14227af66b4dc4bd9dad6cc5a4408f49509061b0a1cedd35ec1c74597b2a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:57:57.072399 systemd[1]: Started cri-containerd-45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a.scope - libcontainer container 45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a. Nov 6 23:57:57.076605 systemd[1]: Started cri-containerd-d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e.scope - libcontainer container d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e. Nov 6 23:57:57.096482 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:57:57.097467 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:57:57.098175 sshd[3985]: Connection closed by 10.0.0.1 port 45826 Nov 6 23:57:57.098581 sshd-session[3982]: pam_unix(sshd:session): session closed for user core Nov 6 23:57:57.104213 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:45826.service: Deactivated successfully. Nov 6 23:57:57.106214 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:57:57.106978 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:57:57.108452 systemd-logind[1598]: Removed session 10. 
Nov 6 23:57:57.138530 containerd[1626]: time="2025-11-06T23:57:57.138411643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-46wwn,Uid:69c1f7b0-a284-408e-a227-28268b23a6b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e\"" Nov 6 23:57:57.139307 kubelet[2772]: E1106 23:57:57.139276 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:57.142583 containerd[1626]: time="2025-11-06T23:57:57.142542935Z" level=info msg="CreateContainer within sandbox \"d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:57:57.144002 containerd[1626]: time="2025-11-06T23:57:57.143966791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cbfp8,Uid:9b3ab03c-4131-4f88-95f4-53bfba6de5d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a\"" Nov 6 23:57:57.144853 kubelet[2772]: E1106 23:57:57.144827 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:57.148341 containerd[1626]: time="2025-11-06T23:57:57.148313498Z" level=info msg="CreateContainer within sandbox \"45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:57:57.154376 containerd[1626]: time="2025-11-06T23:57:57.154347065Z" level=info msg="Container dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:57.160771 containerd[1626]: time="2025-11-06T23:57:57.160728395Z" level=info msg="Container 278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:57:57.167160 containerd[1626]: time="2025-11-06T23:57:57.167136435Z" level=info msg="CreateContainer within sandbox \"45ae49c4c89f8f956ed3a36d746c3d2fb8455139058ec2b7155ac8ca9f263a6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575\"" Nov 6 23:57:57.167664 containerd[1626]: time="2025-11-06T23:57:57.167583335Z" level=info msg="StartContainer for \"278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575\"" Nov 6 23:57:57.168231 containerd[1626]: time="2025-11-06T23:57:57.168196957Z" level=info msg="CreateContainer within sandbox \"d2643e1dab4b06d66b7739587dc32848988bcfbdbf9b72a0a32a1a0a86153f6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d\"" Nov 6 23:57:57.168735 containerd[1626]: time="2025-11-06T23:57:57.168702447Z" level=info msg="StartContainer for \"dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d\"" Nov 6 23:57:57.168902 containerd[1626]: time="2025-11-06T23:57:57.168851768Z" level=info msg="connecting to shim 278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575" address="unix:///run/containerd/s/4a87f3be30ebdf4091ddf47e258d7240b39791ce3dc6973a3e6d4313e955dfe2" protocol=ttrpc version=3 Nov 6 23:57:57.170289 containerd[1626]: time="2025-11-06T23:57:57.169398856Z" level=info msg="connecting to shim 
dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d" address="unix:///run/containerd/s/9d9f14227af66b4dc4bd9dad6cc5a4408f49509061b0a1cedd35ec1c74597b2a" protocol=ttrpc version=3 Nov 6 23:57:57.197316 systemd[1]: Started cri-containerd-278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575.scope - libcontainer container 278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575. Nov 6 23:57:57.198888 systemd[1]: Started cri-containerd-dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d.scope - libcontainer container dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d. Nov 6 23:57:57.228192 containerd[1626]: time="2025-11-06T23:57:57.227926956Z" level=info msg="StartContainer for \"278082666599755b8aae716880045bf6740394ab8bd81a6debf5be3bce217575\" returns successfully" Nov 6 23:57:57.232118 containerd[1626]: time="2025-11-06T23:57:57.232008136Z" level=info msg="StartContainer for \"dca04eeb8d16446a3097b0fc8090653226c5d4c17ace340846997909de33d45d\" returns successfully" Nov 6 23:57:57.345446 kubelet[2772]: E1106 23:57:57.345382 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:57.349065 kubelet[2772]: E1106 23:57:57.348868 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:57.375799 kubelet[2772]: I1106 23:57:57.375656 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-46wwn" podStartSLOduration=30.375638569 podStartE2EDuration="30.375638569s" podCreationTimestamp="2025-11-06 23:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:57.374669678 +0000 UTC m=+37.580526021" watchObservedRunningTime="2025-11-06 23:57:57.375638569 +0000 UTC m=+37.581494912" Nov 6 23:57:57.384278 kubelet[2772]: I1106 23:57:57.384223 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cbfp8" podStartSLOduration=30.384202029 podStartE2EDuration="30.384202029s" podCreationTimestamp="2025-11-06 23:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:57:57.383847924 +0000 UTC m=+37.589704267" watchObservedRunningTime="2025-11-06 23:57:57.384202029 +0000 UTC m=+37.590058372" Nov 6 23:57:58.017494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034781708.mount: Deactivated successfully. 
Nov 6 23:57:58.352066 kubelet[2772]: E1106 23:57:58.351865 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:58.352066 kubelet[2772]: E1106 23:57:58.352054 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:59.353935 kubelet[2772]: E1106 23:57:59.353903 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:57:59.354361 kubelet[2772]: E1106 23:57:59.353903 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:02.122226 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:45834.service - OpenSSH per-connection server daemon (10.0.0.1:45834). Nov 6 23:58:02.178562 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 45834 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:02.180482 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:02.185360 systemd-logind[1598]: New session 11 of user core. Nov 6 23:58:02.199297 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:58:02.313943 sshd[4174]: Connection closed by 10.0.0.1 port 45834 Nov 6 23:58:02.314358 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:02.319681 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:45834.service: Deactivated successfully. Nov 6 23:58:02.321784 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:58:02.322615 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:58:02.323920 systemd-logind[1598]: Removed session 11. Nov 6 23:58:07.337205 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:44824.service - OpenSSH per-connection server daemon (10.0.0.1:44824). Nov 6 23:58:07.393240 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 44824 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:07.394781 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:07.399134 systemd-logind[1598]: New session 12 of user core. Nov 6 23:58:07.407246 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:58:07.518399 sshd[4191]: Connection closed by 10.0.0.1 port 44824 Nov 6 23:58:07.518758 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:07.531792 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:44824.service: Deactivated successfully. Nov 6 23:58:07.533675 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:58:07.534729 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:58:07.537786 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:44840.service - OpenSSH per-connection server daemon (10.0.0.1:44840). Nov 6 23:58:07.538608 systemd-logind[1598]: Removed session 12. 
Nov 6 23:58:07.590076 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 44840 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:07.591723 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:07.595990 systemd-logind[1598]: New session 13 of user core. Nov 6 23:58:07.610346 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:58:07.761969 sshd[4208]: Connection closed by 10.0.0.1 port 44840 Nov 6 23:58:07.762476 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:07.773884 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:44840.service: Deactivated successfully. Nov 6 23:58:07.778730 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:58:07.781562 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:58:07.790227 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:44856.service - OpenSSH per-connection server daemon (10.0.0.1:44856). Nov 6 23:58:07.791772 systemd-logind[1598]: Removed session 13. Nov 6 23:58:07.858877 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 44856 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:07.860873 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:07.866034 systemd-logind[1598]: New session 14 of user core. Nov 6 23:58:07.885381 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:58:08.009997 sshd[4222]: Connection closed by 10.0.0.1 port 44856 Nov 6 23:58:08.010305 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:08.015085 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:44856.service: Deactivated successfully. Nov 6 23:58:08.017622 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:58:08.019363 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:58:08.020621 systemd-logind[1598]: Removed session 14. Nov 6 23:58:13.026441 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:35062.service - OpenSSH per-connection server daemon (10.0.0.1:35062). Nov 6 23:58:13.079062 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 35062 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:13.080409 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:13.085070 systemd-logind[1598]: New session 15 of user core. Nov 6 23:58:13.094269 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:58:13.210813 sshd[4238]: Connection closed by 10.0.0.1 port 35062 Nov 6 23:58:13.211154 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:13.216716 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:35062.service: Deactivated successfully. Nov 6 23:58:13.219037 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:58:13.220139 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:58:13.221765 systemd-logind[1598]: Removed session 15. Nov 6 23:58:18.227930 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072). 
Nov 6 23:58:18.281040 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:18.282908 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:18.287844 systemd-logind[1598]: New session 16 of user core. Nov 6 23:58:18.298308 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:58:18.430557 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:35086.service - OpenSSH per-connection server daemon (10.0.0.1:35086). Nov 6 23:58:18.496840 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 35086 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:18.499013 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:18.504267 systemd-logind[1598]: New session 17 of user core. Nov 6 23:58:18.523483 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:58:18.625903 sshd[4255]: Connection closed by 10.0.0.1 port 35072 Nov 6 23:58:18.626340 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:18.632640 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:35072.service: Deactivated successfully. Nov 6 23:58:18.635425 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:58:18.636273 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:58:18.637833 systemd-logind[1598]: Removed session 16. Nov 6 23:58:18.726101 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Nov 6 23:58:18.777476 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:18.779001 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:18.783553 systemd-logind[1598]: New session 18 of user core. Nov 6 23:58:18.792254 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:58:18.921939 sshd[4268]: Connection closed by 10.0.0.1 port 35086 Nov 6 23:58:18.922519 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:18.927644 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:35086.service: Deactivated successfully. Nov 6 23:58:18.929997 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:58:18.930927 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:58:18.932237 systemd-logind[1598]: Removed session 17. Nov 6 23:58:19.430930 sshd[4283]: Connection closed by 10.0.0.1 port 35098 Nov 6 23:58:19.433580 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:19.444192 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:35098.service: Deactivated successfully. Nov 6 23:58:19.448740 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:58:19.449854 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:58:19.453396 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:35110.service - OpenSSH per-connection server daemon (10.0.0.1:35110). Nov 6 23:58:19.455042 systemd-logind[1598]: Removed session 18. 
Nov 6 23:58:19.506925 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 35110 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:19.508548 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:19.513244 systemd-logind[1598]: New session 19 of user core. Nov 6 23:58:19.521286 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:58:19.746175 sshd[4307]: Connection closed by 10.0.0.1 port 35110 Nov 6 23:58:19.749246 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:19.757340 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:35110.service: Deactivated successfully. Nov 6 23:58:19.759252 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:58:19.760089 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:58:19.762793 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:35126.service - OpenSSH per-connection server daemon (10.0.0.1:35126). Nov 6 23:58:19.763479 systemd-logind[1598]: Removed session 19. Nov 6 23:58:19.826832 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 35126 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:19.828809 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:19.833432 systemd-logind[1598]: New session 20 of user core. Nov 6 23:58:19.841333 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:58:19.962450 sshd[4322]: Connection closed by 10.0.0.1 port 35126 Nov 6 23:58:19.962795 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:19.966604 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:35126.service: Deactivated successfully. Nov 6 23:58:19.968645 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:58:19.970135 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:58:19.971647 systemd-logind[1598]: Removed session 20. Nov 6 23:58:24.974877 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:54878.service - OpenSSH per-connection server daemon (10.0.0.1:54878). Nov 6 23:58:25.031770 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 54878 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:25.033173 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:25.037817 systemd-logind[1598]: New session 21 of user core. Nov 6 23:58:25.046270 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:58:25.165104 sshd[4343]: Connection closed by 10.0.0.1 port 54878 Nov 6 23:58:25.165449 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:25.169332 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:54878.service: Deactivated successfully. Nov 6 23:58:25.171106 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:58:25.171895 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:58:25.172888 systemd-logind[1598]: Removed session 21. Nov 6 23:58:30.190708 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:54886.service - OpenSSH per-connection server daemon (10.0.0.1:54886). 
Nov 6 23:58:30.239794 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 54886 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:30.241930 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:30.247406 systemd-logind[1598]: New session 22 of user core. Nov 6 23:58:30.264395 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:58:30.372675 sshd[4363]: Connection closed by 10.0.0.1 port 54886 Nov 6 23:58:30.373001 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:30.376809 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:54886.service: Deactivated successfully. Nov 6 23:58:30.378831 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:58:30.380518 systemd-logind[1598]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:58:30.382179 systemd-logind[1598]: Removed session 22. Nov 6 23:58:31.884362 kubelet[2772]: E1106 23:58:31.884292 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:35.386200 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). Nov 6 23:58:35.437524 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:35.439305 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:35.443885 systemd-logind[1598]: New session 23 of user core. Nov 6 23:58:35.451262 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:58:35.561644 sshd[4379]: Connection closed by 10.0.0.1 port 40830 Nov 6 23:58:35.561985 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:35.568437 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:40830.service: Deactivated successfully. Nov 6 23:58:35.570897 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:58:35.571848 systemd-logind[1598]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:58:35.574352 systemd-logind[1598]: Removed session 23. Nov 6 23:58:35.883563 kubelet[2772]: E1106 23:58:35.883453 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:37.883954 kubelet[2772]: E1106 23:58:37.883921 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:38.883165 kubelet[2772]: E1106 23:58:38.883077 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:40.579911 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:40838.service - OpenSSH per-connection server daemon (10.0.0.1:40838). Nov 6 23:58:40.641463 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 40838 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:40.643279 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:40.647814 systemd-logind[1598]: New session 24 of user core. Nov 6 23:58:40.663208 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 6 23:58:40.775055 sshd[4395]: Connection closed by 10.0.0.1 port 40838 Nov 6 23:58:40.775506 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:40.788873 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:40838.service: Deactivated successfully. Nov 6 23:58:40.790713 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:58:40.791655 systemd-logind[1598]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:58:40.794888 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:40850.service - OpenSSH per-connection server daemon (10.0.0.1:40850). Nov 6 23:58:40.795654 systemd-logind[1598]: Removed session 24. Nov 6 23:58:40.848240 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 40850 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:40.849491 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:40.854950 systemd-logind[1598]: New session 25 of user core. Nov 6 23:58:40.866305 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 23:58:42.271508 containerd[1626]: time="2025-11-06T23:58:42.271435290Z" level=info msg="StopContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" with timeout 30 (s)" Nov 6 23:58:42.279853 containerd[1626]: time="2025-11-06T23:58:42.279799471Z" level=info msg="Stop container \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" with signal terminated" Nov 6 23:58:42.294632 systemd[1]: cri-containerd-b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb.scope: Deactivated successfully. Nov 6 23:58:42.296537 containerd[1626]: time="2025-11-06T23:58:42.296496628Z" level=info msg="received exit event container_id:\"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" id:\"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" pid:3383 exited_at:{seconds:1762473522 nanos:296272128}" Nov 6 23:58:42.296767 containerd[1626]: time="2025-11-06T23:58:42.296717320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" id:\"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" pid:3383 exited_at:{seconds:1762473522 nanos:296272128}" Nov 6 23:58:42.303564 containerd[1626]: time="2025-11-06T23:58:42.303517489Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:58:42.305820 containerd[1626]: time="2025-11-06T23:58:42.305769958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" id:\"ece8f439dc1564b84a94d2499477f3d31fcd96f1aa3e43121bc0f336a8896383\" pid:4433 exited_at:{seconds:1762473522 nanos:305016721}" Nov 6 23:58:42.308384 containerd[1626]: time="2025-11-06T23:58:42.308356202Z" level=info msg="StopContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" with timeout 2 (s)" Nov 6 23:58:42.308654 containerd[1626]: time="2025-11-06T23:58:42.308626967Z" level=info msg="Stop container \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" with signal terminated" Nov 6 23:58:42.316342 systemd-networkd[1522]: lxc_health: Link DOWN Nov 6 23:58:42.316354 systemd-networkd[1522]: lxc_health: Lost carrier Nov 6 23:58:42.329457 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb-rootfs.mount: Deactivated successfully. Nov 6 23:58:42.342044 systemd[1]: cri-containerd-40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c.scope: Deactivated successfully. Nov 6 23:58:42.342464 systemd[1]: cri-containerd-40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c.scope: Consumed 6.637s CPU time, 123.1M memory peak, 208K read from disk, 13.3M written to disk. Nov 6 23:58:42.343199 containerd[1626]: time="2025-11-06T23:58:42.343161159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" pid:3454 exited_at:{seconds:1762473522 nanos:342896204}" Nov 6 23:58:42.343199 containerd[1626]: time="2025-11-06T23:58:42.343187258Z" level=info msg="received exit event container_id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" id:\"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" pid:3454 exited_at:{seconds:1762473522 nanos:342896204}" Nov 6 23:58:42.345806 containerd[1626]: time="2025-11-06T23:58:42.345782308Z" level=info msg="StopContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" returns successfully" Nov 6 23:58:42.348283 containerd[1626]: time="2025-11-06T23:58:42.348255390Z" level=info msg="StopPodSandbox for \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\"" Nov 6 23:58:42.358896 containerd[1626]: time="2025-11-06T23:58:42.358550691Z" level=info msg="Container to stop \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.365259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c-rootfs.mount: Deactivated successfully. Nov 6 23:58:42.367644 systemd[1]: cri-containerd-b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355.scope: Deactivated successfully. Nov 6 23:58:42.370204 containerd[1626]: time="2025-11-06T23:58:42.370119001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" id:\"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" pid:2967 exit_status:137 exited_at:{seconds:1762473522 nanos:369796798}" Nov 6 23:58:42.398177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355-rootfs.mount: Deactivated successfully. 
Nov 6 23:58:42.411060 containerd[1626]: time="2025-11-06T23:58:42.410844113Z" level=info msg="shim disconnected" id=b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355 namespace=k8s.io Nov 6 23:58:42.411060 containerd[1626]: time="2025-11-06T23:58:42.411045809Z" level=warning msg="cleaning up after shim disconnected" id=b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355 namespace=k8s.io Nov 6 23:58:42.429038 containerd[1626]: time="2025-11-06T23:58:42.411055557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:58:42.429168 containerd[1626]: time="2025-11-06T23:58:42.410992670Z" level=info msg="StopContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" returns successfully" Nov 6 23:58:42.429683 containerd[1626]: time="2025-11-06T23:58:42.429645981Z" level=info msg="StopPodSandbox for \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\"" Nov 6 23:58:42.429827 containerd[1626]: time="2025-11-06T23:58:42.429713237Z" level=info msg="Container to stop \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.429827 containerd[1626]: time="2025-11-06T23:58:42.429728335Z" level=info msg="Container to stop \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.429827 containerd[1626]: time="2025-11-06T23:58:42.429736700Z" level=info msg="Container to stop \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.429827 containerd[1626]: time="2025-11-06T23:58:42.429744756Z" level=info msg="Container to stop \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.429827 containerd[1626]: time="2025-11-06T23:58:42.429752260Z" level=info msg="Container to stop \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:58:42.438321 systemd[1]: cri-containerd-bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4.scope: Deactivated successfully. Nov 6 23:58:42.463211 containerd[1626]: time="2025-11-06T23:58:42.463153895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" id:\"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" pid:2954 exit_status:137 exited_at:{seconds:1762473522 nanos:445392831}" Nov 6 23:58:42.465310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355-shm.mount: Deactivated successfully. 
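The containerd TaskExit events above record exit times as exited_at:{seconds:... nanos:...}, a Unix timestamp split into whole seconds and nanoseconds. A short, illustrative Go conversion of the values from the bbf5e3078... sandbox exit back to wall-clock time (the constants are copied from the log entry above):

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values from the TaskExit event for sandbox bbf5e3078ef8fe66...
	exitedAt := time.Unix(1762473522, 445392831).UTC()
	// Prints 2025-11-06T23:58:42.445392831Z, matching the Nov 6 23:58:42
	// journal entries around the sandbox teardown.
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
}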
Nov 6 23:58:42.467635 containerd[1626]: time="2025-11-06T23:58:42.467603480Z" level=info msg="received exit event sandbox_id:\"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" exit_status:137 exited_at:{seconds:1762473522 nanos:369796798}" Nov 6 23:58:42.470357 containerd[1626]: time="2025-11-06T23:58:42.470332251Z" level=info msg="TearDown network for sandbox \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" successfully" Nov 6 23:58:42.470357 containerd[1626]: time="2025-11-06T23:58:42.470352739Z" level=info msg="StopPodSandbox for \"b030cd59e31d80d1c572937ab44a3713592aa99ea51fca3c2e2a79478fe60355\" returns successfully" Nov 6 23:58:42.479003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4-rootfs.mount: Deactivated successfully. Nov 6 23:58:42.481136 containerd[1626]: time="2025-11-06T23:58:42.481091318Z" level=info msg="shim disconnected" id=bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4 namespace=k8s.io Nov 6 23:58:42.481483 containerd[1626]: time="2025-11-06T23:58:42.481304797Z" level=warning msg="cleaning up after shim disconnected" id=bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4 namespace=k8s.io Nov 6 23:58:42.481611 containerd[1626]: time="2025-11-06T23:58:42.481457032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:58:42.482532 containerd[1626]: time="2025-11-06T23:58:42.482507254Z" level=info msg="TearDown network for sandbox \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" successfully" Nov 6 23:58:42.482532 containerd[1626]: time="2025-11-06T23:58:42.482530848Z" level=info msg="StopPodSandbox for \"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" returns successfully" Nov 6 23:58:42.483011 containerd[1626]: time="2025-11-06T23:58:42.482969198Z" level=info msg="received exit event sandbox_id:\"bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4\" exit_status:137 exited_at:{seconds:1762473522 nanos:445392831}" Nov 6 23:58:42.532779 kubelet[2772]: I1106 23:58:42.532651 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr6n6\" (UniqueName: \"kubernetes.io/projected/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-kube-api-access-lr6n6\") pod \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\" (UID: \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\") " Nov 6 23:58:42.532779 kubelet[2772]: I1106 23:58:42.532692 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-cilium-config-path\") pod \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\" (UID: \"ae6ccc5f-38de-460a-a9fd-ba4749d438b4\") " Nov 6 23:58:42.535995 kubelet[2772]: I1106 23:58:42.535961 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae6ccc5f-38de-460a-a9fd-ba4749d438b4" (UID: "ae6ccc5f-38de-460a-a9fd-ba4749d438b4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:58:42.538746 kubelet[2772]: I1106 23:58:42.538690 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-kube-api-access-lr6n6" (OuterVolumeSpecName: "kube-api-access-lr6n6") pod "ae6ccc5f-38de-460a-a9fd-ba4749d438b4" (UID: "ae6ccc5f-38de-460a-a9fd-ba4749d438b4"). InnerVolumeSpecName "kube-api-access-lr6n6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:58:42.633823 kubelet[2772]: I1106 23:58:42.633782 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-config-path\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.633823 kubelet[2772]: I1106 23:58:42.633822 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-run\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633835 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-lib-modules\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633855 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hostproc\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633870 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-net\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633884 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-kernel\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633901 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-xtables-lock\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634050 kubelet[2772]: I1106 23:58:42.633972 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87bdf9df-e0e0-46c1-90b9-d40af36c1376-clustermesh-secrets\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634206 kubelet[2772]: I1106 23:58:42.633965 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hostproc" (OuterVolumeSpecName: 
"hostproc") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634206 kubelet[2772]: I1106 23:58:42.634023 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cni-path" (OuterVolumeSpecName: "cni-path") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634206 kubelet[2772]: I1106 23:58:42.634039 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634206 kubelet[2772]: I1106 23:58:42.633992 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cni-path\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634206 kubelet[2772]: I1106 23:58:42.634082 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-etc-cni-netd\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634322 kubelet[2772]: I1106 23:58:42.634105 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-bpf-maps\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634322 kubelet[2772]: I1106 23:58:42.634118 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-cgroup\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634322 kubelet[2772]: I1106 23:58:42.634041 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634322 kubelet[2772]: I1106 23:58:42.633969 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634322 kubelet[2772]: I1106 23:58:42.634022 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634058 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634146 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634164 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j65dc\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-kube-api-access-j65dc\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634194 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hubble-tls\") pod \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\" (UID: \"87bdf9df-e0e0-46c1-90b9-d40af36c1376\") " Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634226 2772 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634427 kubelet[2772]: I1106 23:58:42.634234 2772 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634245 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lr6n6\" (UniqueName: \"kubernetes.io/projected/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-kube-api-access-lr6n6\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634253 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634261 2772 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634267 2772 reconciler_common.go:299] "Volume detached for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634276 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634283 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634291 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae6ccc5f-38de-460a-a9fd-ba4749d438b4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.634555 kubelet[2772]: I1106 23:58:42.634299 2772 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.635312 kubelet[2772]: I1106 23:58:42.635257 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.635312 kubelet[2772]: I1106 23:58:42.635281 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:58:42.637484 kubelet[2772]: I1106 23:58:42.637452 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-kube-api-access-j65dc" (OuterVolumeSpecName: "kube-api-access-j65dc") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "kube-api-access-j65dc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:58:42.637671 kubelet[2772]: I1106 23:58:42.637483 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:58:42.637909 kubelet[2772]: I1106 23:58:42.637887 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bdf9df-e0e0-46c1-90b9-d40af36c1376-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:58:42.638412 kubelet[2772]: I1106 23:58:42.638382 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87bdf9df-e0e0-46c1-90b9-d40af36c1376" (UID: "87bdf9df-e0e0-46c1-90b9-d40af36c1376"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734932 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734963 2772 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87bdf9df-e0e0-46c1-90b9-d40af36c1376-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734973 2772 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734981 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87bdf9df-e0e0-46c1-90b9-d40af36c1376-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734992 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j65dc\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-kube-api-access-j65dc\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:42.734990 kubelet[2772]: I1106 23:58:42.734999 2772 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87bdf9df-e0e0-46c1-90b9-d40af36c1376-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 6 23:58:43.327098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbf5e3078ef8fe663d49e7b75ffd21d2467847ac5694515a7f1f57924cf92de4-shm.mount: Deactivated successfully. Nov 6 23:58:43.327237 systemd[1]: var-lib-kubelet-pods-ae6ccc5f\x2d38de\x2d460a\x2da9fd\x2dba4749d438b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlr6n6.mount: Deactivated successfully. Nov 6 23:58:43.327341 systemd[1]: var-lib-kubelet-pods-87bdf9df\x2de0e0\x2d46c1\x2d90b9\x2dd40af36c1376-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj65dc.mount: Deactivated successfully. Nov 6 23:58:43.327436 systemd[1]: var-lib-kubelet-pods-87bdf9df\x2de0e0\x2d46c1\x2d90b9\x2dd40af36c1376-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:58:43.327540 systemd[1]: var-lib-kubelet-pods-87bdf9df\x2de0e0\x2d46c1\x2d90b9\x2dd40af36c1376-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
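The long var-lib-kubelet-pods-....mount unit names just deactivated are systemd's escaped form of the kubelet volume paths: the leading "/" is dropped, the remaining "/" separators become "-", and bytes such as "-" and "~" are hex-escaped (\x2d, \x7e). A rough Go approximation of that path escaping (systemd-escape --path covers more edge cases than this sketch):

package main

import (
	"fmt"
	"strings"
)

// escapePath roughly mimics `systemd-escape --path`: trim slashes, map the
// remaining "/" to "-", keep ASCII alphanumerics plus ".", "_" and ":", and
// hex-escape everything else. Real systemd handles additional corner cases.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		isSafe := (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
			(c >= '0' && c <= '9') || c == '.' || c == '_' || c == ':'
		switch {
		case c == '/':
			b.WriteByte('-')
		case isSafe:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// The hubble-tls projected volume of pod 87bdf9df-e0e0-46c1-90b9-d40af36c1376.
	path := "/var/lib/kubelet/pods/87bdf9df-e0e0-46c1-90b9-d40af36c1376/volumes/kubernetes.io~projected/hubble-tls"
	// Prints the same unit name the journal reports, ending in hubble\x2dtls.mount.
	fmt.Println(escapePath(path) + ".mount")
}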
Nov 6 23:58:43.453970 kubelet[2772]: I1106 23:58:43.453570 2772 scope.go:117] "RemoveContainer" containerID="b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb" Nov 6 23:58:43.456036 containerd[1626]: time="2025-11-06T23:58:43.455083670Z" level=info msg="RemoveContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\"" Nov 6 23:58:43.460653 containerd[1626]: time="2025-11-06T23:58:43.460602485Z" level=info msg="RemoveContainer for \"b8a73d63dbdc098096ffa8530f180dc916e0841db1d7ac31019d94297e447ffb\" returns successfully" Nov 6 23:58:43.460758 systemd[1]: Removed slice kubepods-besteffort-podae6ccc5f_38de_460a_a9fd_ba4749d438b4.slice - libcontainer container kubepods-besteffort-podae6ccc5f_38de_460a_a9fd_ba4749d438b4.slice. Nov 6 23:58:43.460874 kubelet[2772]: I1106 23:58:43.460828 2772 scope.go:117] "RemoveContainer" containerID="40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c" Nov 6 23:58:43.463148 containerd[1626]: time="2025-11-06T23:58:43.462917653Z" level=info msg="RemoveContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\"" Nov 6 23:58:43.465699 systemd[1]: Removed slice kubepods-burstable-pod87bdf9df_e0e0_46c1_90b9_d40af36c1376.slice - libcontainer container kubepods-burstable-pod87bdf9df_e0e0_46c1_90b9_d40af36c1376.slice. Nov 6 23:58:43.465812 systemd[1]: kubepods-burstable-pod87bdf9df_e0e0_46c1_90b9_d40af36c1376.slice: Consumed 6.759s CPU time, 123.4M memory peak, 224K read from disk, 13.3M written to disk. Nov 6 23:58:43.483116 containerd[1626]: time="2025-11-06T23:58:43.483060772Z" level=info msg="RemoveContainer for \"40af1a16a0540d1445e62dc776635e82be8856764608535ce4e733122271269c\" returns successfully" Nov 6 23:58:43.483407 kubelet[2772]: I1106 23:58:43.483379 2772 scope.go:117] "RemoveContainer" containerID="5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7" Nov 6 23:58:43.485453 containerd[1626]: time="2025-11-06T23:58:43.485425411Z" level=info msg="RemoveContainer for \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\"" Nov 6 23:58:43.489633 containerd[1626]: time="2025-11-06T23:58:43.489607608Z" level=info msg="RemoveContainer for \"5bf0e837b5f089ea52f5dc9ff688ae7ea410d7640890037860cbc360dfe38bd7\" returns successfully" Nov 6 23:58:43.489796 kubelet[2772]: I1106 23:58:43.489762 2772 scope.go:117] "RemoveContainer" containerID="4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a" Nov 6 23:58:43.491814 containerd[1626]: time="2025-11-06T23:58:43.491794315Z" level=info msg="RemoveContainer for \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\"" Nov 6 23:58:43.495762 containerd[1626]: time="2025-11-06T23:58:43.495735220Z" level=info msg="RemoveContainer for \"4701000b0ce2f6565c57c9005cc73fe6738e7523a32817abb7da0b6bd749589a\" returns successfully" Nov 6 23:58:43.495871 kubelet[2772]: I1106 23:58:43.495847 2772 scope.go:117] "RemoveContainer" containerID="987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018" Nov 6 23:58:43.496892 containerd[1626]: time="2025-11-06T23:58:43.496874140Z" level=info msg="RemoveContainer for \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\"" Nov 6 23:58:43.500262 containerd[1626]: time="2025-11-06T23:58:43.500230553Z" level=info msg="RemoveContainer for \"987b66950c06a3ff5221ef4c3bf1740aa5f3dc28a439a752c07ba8b144113018\" returns successfully" Nov 6 23:58:43.500377 kubelet[2772]: I1106 23:58:43.500342 2772 scope.go:117] "RemoveContainer" 
containerID="f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3" Nov 6 23:58:43.501390 containerd[1626]: time="2025-11-06T23:58:43.501368069Z" level=info msg="RemoveContainer for \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\"" Nov 6 23:58:43.504702 containerd[1626]: time="2025-11-06T23:58:43.504685069Z" level=info msg="RemoveContainer for \"f9c05f3d9b05f3358f462f7d63d7617b3e3c9c9390ecbab7bf1c2092e81c3dd3\" returns successfully" Nov 6 23:58:43.885826 kubelet[2772]: I1106 23:58:43.885770 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bdf9df-e0e0-46c1-90b9-d40af36c1376" path="/var/lib/kubelet/pods/87bdf9df-e0e0-46c1-90b9-d40af36c1376/volumes" Nov 6 23:58:43.886706 kubelet[2772]: I1106 23:58:43.886677 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6ccc5f-38de-460a-a9fd-ba4749d438b4" path="/var/lib/kubelet/pods/ae6ccc5f-38de-460a-a9fd-ba4749d438b4/volumes" Nov 6 23:58:44.358264 sshd[4411]: Connection closed by 10.0.0.1 port 40850 Nov 6 23:58:44.358953 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:44.373497 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:40850.service: Deactivated successfully. Nov 6 23:58:44.376037 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 23:58:44.377077 systemd-logind[1598]: Session 25 logged out. Waiting for processes to exit. Nov 6 23:58:44.380418 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Nov 6 23:58:44.381098 systemd-logind[1598]: Removed session 25. Nov 6 23:58:44.437877 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:44.439814 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:44.445308 systemd-logind[1598]: New session 26 of user core. Nov 6 23:58:44.455329 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 23:58:44.939004 kubelet[2772]: E1106 23:58:44.938957 2772 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:58:44.954274 sshd[4568]: Connection closed by 10.0.0.1 port 47746 Nov 6 23:58:44.954547 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:44.964186 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:47746.service: Deactivated successfully. Nov 6 23:58:44.966072 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 23:58:44.973758 systemd-logind[1598]: Session 26 logged out. Waiting for processes to exit. Nov 6 23:58:44.978457 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:47748.service - OpenSSH per-connection server daemon (10.0.0.1:47748). Nov 6 23:58:44.987226 systemd-logind[1598]: Removed session 26. Nov 6 23:58:44.991607 systemd[1]: Created slice kubepods-burstable-podad271f82_4ddd_449a_acb9_68305f02a9ad.slice - libcontainer container kubepods-burstable-podad271f82_4ddd_449a_acb9_68305f02a9ad.slice. Nov 6 23:58:45.032111 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:45.034101 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:45.044770 systemd-logind[1598]: New session 27 of user core. 
Nov 6 23:58:45.048385 kubelet[2772]: I1106 23:58:45.048301 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ad271f82-4ddd-449a-acb9-68305f02a9ad-cilium-ipsec-secrets\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.048767 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 23:58:45.049365 kubelet[2772]: I1106 23:58:45.049347 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-host-proc-sys-net\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.049544 kubelet[2772]: I1106 23:58:45.049529 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-cilium-run\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.051346 kubelet[2772]: I1106 23:58:45.051330 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-host-proc-sys-kernel\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.051661 kubelet[2772]: I1106 23:58:45.051647 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-hostproc\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.051973 kubelet[2772]: I1106 23:58:45.051901 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-etc-cni-netd\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.051973 kubelet[2772]: I1106 23:58:45.051921 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-cilium-cgroup\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.052510 kubelet[2772]: I1106 23:58:45.051936 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad271f82-4ddd-449a-acb9-68305f02a9ad-clustermesh-secrets\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.052510 kubelet[2772]: I1106 23:58:45.052481 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad271f82-4ddd-449a-acb9-68305f02a9ad-cilium-config-path\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.052494 2772 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad271f82-4ddd-449a-acb9-68305f02a9ad-hubble-tls\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.053177 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljh7\" (UniqueName: \"kubernetes.io/projected/ad271f82-4ddd-449a-acb9-68305f02a9ad-kube-api-access-lljh7\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.053196 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-lib-modules\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.053209 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-bpf-maps\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.053255 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-cni-path\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.053716 kubelet[2772]: I1106 23:58:45.053274 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad271f82-4ddd-449a-acb9-68305f02a9ad-xtables-lock\") pod \"cilium-f48rl\" (UID: \"ad271f82-4ddd-449a-acb9-68305f02a9ad\") " pod="kube-system/cilium-f48rl" Nov 6 23:58:45.105691 sshd[4583]: Connection closed by 10.0.0.1 port 47748 Nov 6 23:58:45.106043 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Nov 6 23:58:45.124750 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:47748.service: Deactivated successfully. Nov 6 23:58:45.126547 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 23:58:45.127406 systemd-logind[1598]: Session 27 logged out. Waiting for processes to exit. Nov 6 23:58:45.130306 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:47760.service - OpenSSH per-connection server daemon (10.0.0.1:47760). Nov 6 23:58:45.130930 systemd-logind[1598]: Removed session 27. Nov 6 23:58:45.183035 sshd[4591]: Accepted publickey for core from 10.0.0.1 port 47760 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 6 23:58:45.184661 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:58:45.188847 systemd-logind[1598]: New session 28 of user core. Nov 6 23:58:45.197392 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 6 23:58:45.301022 kubelet[2772]: E1106 23:58:45.300117 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:45.301421 containerd[1626]: time="2025-11-06T23:58:45.301388013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f48rl,Uid:ad271f82-4ddd-449a-acb9-68305f02a9ad,Namespace:kube-system,Attempt:0,}" Nov 6 23:58:45.322815 containerd[1626]: time="2025-11-06T23:58:45.322760687Z" level=info msg="connecting to shim 4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 23:58:45.353311 systemd[1]: Started cri-containerd-4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a.scope - libcontainer container 4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a. Nov 6 23:58:45.377353 containerd[1626]: time="2025-11-06T23:58:45.377303756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f48rl,Uid:ad271f82-4ddd-449a-acb9-68305f02a9ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\"" Nov 6 23:58:45.377999 kubelet[2772]: E1106 23:58:45.377975 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:45.382312 containerd[1626]: time="2025-11-06T23:58:45.382269329Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:58:45.389084 containerd[1626]: time="2025-11-06T23:58:45.389058751Z" level=info msg="Container 2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:58:45.396935 containerd[1626]: time="2025-11-06T23:58:45.396898045Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\"" Nov 6 23:58:45.397396 containerd[1626]: time="2025-11-06T23:58:45.397364007Z" level=info msg="StartContainer for \"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\"" Nov 6 23:58:45.398550 containerd[1626]: time="2025-11-06T23:58:45.398519977Z" level=info msg="connecting to shim 2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" protocol=ttrpc version=3 Nov 6 23:58:45.422251 systemd[1]: Started cri-containerd-2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442.scope - libcontainer container 2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442. Nov 6 23:58:45.449307 containerd[1626]: time="2025-11-06T23:58:45.449193343Z" level=info msg="StartContainer for \"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\" returns successfully" Nov 6 23:58:45.458675 systemd[1]: cri-containerd-2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442.scope: Deactivated successfully. 
Nov 6 23:58:45.459800 containerd[1626]: time="2025-11-06T23:58:45.459770927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\" id:\"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\" pid:4665 exited_at:{seconds:1762473525 nanos:459436491}" Nov 6 23:58:45.459862 containerd[1626]: time="2025-11-06T23:58:45.459832862Z" level=info msg="received exit event container_id:\"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\" id:\"2d2de4a99823ee170f0b9952265414e06586e6680da2cadc9827f22842603442\" pid:4665 exited_at:{seconds:1762473525 nanos:459436491}" Nov 6 23:58:45.466606 kubelet[2772]: E1106 23:58:45.466576 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:46.470314 kubelet[2772]: E1106 23:58:46.470277 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:58:46.616582 containerd[1626]: time="2025-11-06T23:58:46.616519804Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:58:46.635509 containerd[1626]: time="2025-11-06T23:58:46.635467417Z" level=info msg="Container f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c: CDI devices from CRI Config.CDIDevices: []" Nov 6 23:58:46.642913 containerd[1626]: time="2025-11-06T23:58:46.642875467Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\"" Nov 6 23:58:46.643444 containerd[1626]: time="2025-11-06T23:58:46.643400869Z" level=info msg="StartContainer for \"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\"" Nov 6 23:58:46.644406 containerd[1626]: time="2025-11-06T23:58:46.644380360Z" level=info msg="connecting to shim f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" protocol=ttrpc version=3 Nov 6 23:58:46.668285 systemd[1]: Started cri-containerd-f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c.scope - libcontainer container f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c. Nov 6 23:58:46.696917 containerd[1626]: time="2025-11-06T23:58:46.696868607Z" level=info msg="StartContainer for \"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\" returns successfully" Nov 6 23:58:46.704330 systemd[1]: cri-containerd-f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c.scope: Deactivated successfully. 
Nov 6 23:58:46.705599 containerd[1626]: time="2025-11-06T23:58:46.705522696Z" level=info msg="received exit event container_id:\"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\" id:\"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\" pid:4711 exited_at:{seconds:1762473526 nanos:704963149}"
Nov 6 23:58:46.705714 containerd[1626]: time="2025-11-06T23:58:46.705671824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\" id:\"f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c\" pid:4711 exited_at:{seconds:1762473526 nanos:704963149}"
Nov 6 23:58:46.725505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6cdd435ee2423ddd7434d817bbcccb73e16d42b8eeeeefc8c80b2fd70cb6a0c-rootfs.mount: Deactivated successfully.
Nov 6 23:58:47.473346 kubelet[2772]: E1106 23:58:47.473293 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:47.478047 containerd[1626]: time="2025-11-06T23:58:47.478006472Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:58:47.490001 containerd[1626]: time="2025-11-06T23:58:47.489944632Z" level=info msg="Container d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9: CDI devices from CRI Config.CDIDevices: []"
Nov 6 23:58:47.498860 containerd[1626]: time="2025-11-06T23:58:47.498813844Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\""
Nov 6 23:58:47.499335 containerd[1626]: time="2025-11-06T23:58:47.499263535Z" level=info msg="StartContainer for \"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\""
Nov 6 23:58:47.500458 containerd[1626]: time="2025-11-06T23:58:47.500436378Z" level=info msg="connecting to shim d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" protocol=ttrpc version=3
Nov 6 23:58:47.526365 systemd[1]: Started cri-containerd-d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9.scope - libcontainer container d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9.
Nov 6 23:58:47.565306 containerd[1626]: time="2025-11-06T23:58:47.565268169Z" level=info msg="StartContainer for \"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\" returns successfully"
Nov 6 23:58:47.566069 systemd[1]: cri-containerd-d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9.scope: Deactivated successfully.
Nov 6 23:58:47.568108 containerd[1626]: time="2025-11-06T23:58:47.568071590Z" level=info msg="received exit event container_id:\"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\" id:\"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\" pid:4755 exited_at:{seconds:1762473527 nanos:567881004}"
Nov 6 23:58:47.568285 containerd[1626]: time="2025-11-06T23:58:47.568261044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\" id:\"d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9\" pid:4755 exited_at:{seconds:1762473527 nanos:567881004}"
Nov 6 23:58:47.591016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d20dbc8b5da6e04d1d8cbb356d08e38f28f0ebac07a19b3af8321fbedcca7fc9-rootfs.mount: Deactivated successfully.
Nov 6 23:58:48.478423 kubelet[2772]: E1106 23:58:48.478382 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:48.482634 containerd[1626]: time="2025-11-06T23:58:48.482584702Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:58:48.501349 containerd[1626]: time="2025-11-06T23:58:48.501308201Z" level=info msg="Container dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff: CDI devices from CRI Config.CDIDevices: []"
Nov 6 23:58:48.508072 containerd[1626]: time="2025-11-06T23:58:48.508027203Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\""
Nov 6 23:58:48.508624 containerd[1626]: time="2025-11-06T23:58:48.508545553Z" level=info msg="StartContainer for \"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\""
Nov 6 23:58:48.509513 containerd[1626]: time="2025-11-06T23:58:48.509487624Z" level=info msg="connecting to shim dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" protocol=ttrpc version=3
Nov 6 23:58:48.531274 systemd[1]: Started cri-containerd-dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff.scope - libcontainer container dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff.
Nov 6 23:58:48.556653 systemd[1]: cri-containerd-dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff.scope: Deactivated successfully.
Nov 6 23:58:48.557294 containerd[1626]: time="2025-11-06T23:58:48.557246692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\" id:\"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\" pid:4794 exited_at:{seconds:1762473528 nanos:556846844}"
Nov 6 23:58:48.557615 containerd[1626]: time="2025-11-06T23:58:48.557591186Z" level=info msg="received exit event container_id:\"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\" id:\"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\" pid:4794 exited_at:{seconds:1762473528 nanos:556846844}"
Nov 6 23:58:48.564533 containerd[1626]: time="2025-11-06T23:58:48.564507649Z" level=info msg="StartContainer for \"dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff\" returns successfully"
Nov 6 23:58:49.483604 kubelet[2772]: E1106 23:58:49.483571 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:49.491751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcc141683e696f005696549c9296da649a92132a46ce18c115d57bda6faa6dff-rootfs.mount: Deactivated successfully.
Nov 6 23:58:49.530355 containerd[1626]: time="2025-11-06T23:58:49.530309492Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:58:49.623936 containerd[1626]: time="2025-11-06T23:58:49.623884987Z" level=info msg="Container 8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925: CDI devices from CRI Config.CDIDevices: []"
Nov 6 23:58:49.631318 containerd[1626]: time="2025-11-06T23:58:49.631285936Z" level=info msg="CreateContainer within sandbox \"4c83afd62d3e127e21576fa2e1b5a3eaa239af066258bb116f3caa686af41d3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\""
Nov 6 23:58:49.631944 containerd[1626]: time="2025-11-06T23:58:49.631892230Z" level=info msg="StartContainer for \"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\""
Nov 6 23:58:49.633276 containerd[1626]: time="2025-11-06T23:58:49.633240641Z" level=info msg="connecting to shim 8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925" address="unix:///run/containerd/s/127ce21123ffc055c21c8cf601f57758bf06ead181c1f42c8457875105dfa9d4" protocol=ttrpc version=3
Nov 6 23:58:49.659288 systemd[1]: Started cri-containerd-8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925.scope - libcontainer container 8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925.
Nov 6 23:58:49.695108 containerd[1626]: time="2025-11-06T23:58:49.695066688Z" level=info msg="StartContainer for \"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" returns successfully"
Nov 6 23:58:49.766763 containerd[1626]: time="2025-11-06T23:58:49.766637768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"ca2c903060955547335f4ac74be58c9123d73e8970c5aeafe977076d978ccbf8\" pid:4861 exited_at:{seconds:1762473529 nanos:766275380}"
Nov 6 23:58:50.111165 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 6 23:58:50.497763 kubelet[2772]: E1106 23:58:50.497722 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:50.511033 kubelet[2772]: I1106 23:58:50.510964 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f48rl" podStartSLOduration=6.510947795 podStartE2EDuration="6.510947795s" podCreationTimestamp="2025-11-06 23:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:58:50.510711202 +0000 UTC m=+90.716567545" watchObservedRunningTime="2025-11-06 23:58:50.510947795 +0000 UTC m=+90.716804138"
Nov 6 23:58:51.500252 kubelet[2772]: E1106 23:58:51.500223 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:51.502297 containerd[1626]: time="2025-11-06T23:58:51.502181239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"5853cd9ed7caa529778a32c32a4050135474610e7b1bfacd8b4fb5cb4691980d\" pid:4969 exit_status:1 exited_at:{seconds:1762473531 nanos:501740695}"
Nov 6 23:58:53.220506 systemd-networkd[1522]: lxc_health: Link UP
Nov 6 23:58:53.221954 systemd-networkd[1522]: lxc_health: Gained carrier
Nov 6 23:58:53.301147 kubelet[2772]: E1106 23:58:53.301080 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:53.504819 kubelet[2772]: E1106 23:58:53.504606 2772 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:58:54.083805 containerd[1626]: time="2025-11-06T23:58:54.083749425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"e0409ec2ce111c8d67c1eedbb366195ad2e391864170118f3eb39b1ab5904937\" pid:5421 exited_at:{seconds:1762473534 nanos:83209786}"
Nov 6 23:58:54.590379 systemd-networkd[1522]: lxc_health: Gained IPv6LL
Nov 6 23:58:56.303162 containerd[1626]: time="2025-11-06T23:58:56.302617508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"e2343f28b91af8a5f13032b171c82389d91516cdf0751e44dd6eacf462e5c111\" pid:5453 exited_at:{seconds:1762473536 nanos:302249218}"
Nov 6 23:58:58.408642 containerd[1626]: time="2025-11-06T23:58:58.408595868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"6465103467dfe88761055afa2b2e9fc705b45308b66cb14f3feb1c57b0dc6150\" pid:5484 exited_at:{seconds:1762473538 nanos:408253627}"
Nov 6 23:59:00.518671 containerd[1626]: time="2025-11-06T23:59:00.518621354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8934f5c05db4629b49c7e9fcb122e3fb226fc5f96c8f5e0587f9479a5275f925\" id:\"4af2e94c2a868b9745f2a40d3f2c7b80bbe2e191458b47508a0addfe3aeac287\" pid:5507 exited_at:{seconds:1762473540 nanos:518170370}"
Nov 6 23:59:00.538381 sshd[4599]: Connection closed by 10.0.0.1 port 47760
Nov 6 23:59:00.538824 sshd-session[4591]: pam_unix(sshd:session): session closed for user core
Nov 6 23:59:00.543713 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:47760.service: Deactivated successfully.
Nov 6 23:59:00.546176 systemd[1]: session-28.scope: Deactivated successfully.
Nov 6 23:59:00.547075 systemd-logind[1598]: Session 28 logged out. Waiting for processes to exit.
Nov 6 23:59:00.548534 systemd-logind[1598]: Removed session 28.