Nov 8 00:28:24.939662 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:28:24.939685 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:28:24.939699 kernel: BIOS-provided physical RAM map:
Nov 8 00:28:24.939705 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:28:24.939711 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:28:24.939718 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:28:24.939725 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 8 00:28:24.939732 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 8 00:28:24.939738 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:28:24.939747 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:28:24.939754 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:28:24.939760 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:28:24.939770 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:28:24.939777 kernel: NX (Execute Disable) protection: active
Nov 8 00:28:24.939785 kernel: APIC: Static calls initialized
Nov 8 00:28:24.939798 kernel: SMBIOS 2.8 present.
Nov 8 00:28:24.939805 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 8 00:28:24.939812 kernel: Hypervisor detected: KVM
Nov 8 00:28:24.939819 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:28:24.939826 kernel: kvm-clock: using sched offset of 3624491605 cycles
Nov 8 00:28:24.939834 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:28:24.939841 kernel: tsc: Detected 2794.748 MHz processor
Nov 8 00:28:24.939848 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:28:24.939856 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:28:24.939863 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 8 00:28:24.939873 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:28:24.939880 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:28:24.939887 kernel: Using GB pages for direct mapping
Nov 8 00:28:24.939894 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:28:24.939901 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 8 00:28:24.939909 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939916 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939923 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939933 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 8 00:28:24.939940 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939947 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939954 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939961 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:28:24.939968 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 8 00:28:24.939975 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 8 00:28:24.939987 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 8 00:28:24.939997 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 8 00:28:24.940004 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 8 00:28:24.940011 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 8 00:28:24.940019 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 8 00:28:24.940026 kernel: No NUMA configuration found
Nov 8 00:28:24.940033 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 8 00:28:24.940041 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 8 00:28:24.940051 kernel: Zone ranges:
Nov 8 00:28:24.940058 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:28:24.940066 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 8 00:28:24.940073 kernel: Normal empty
Nov 8 00:28:24.940081 kernel: Movable zone start for each node
Nov 8 00:28:24.940088 kernel: Early memory node ranges
Nov 8 00:28:24.940095 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:28:24.940103 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 8 00:28:24.940110 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 8 00:28:24.940122 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:28:24.940135 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:28:24.940144 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 8 00:28:24.940153 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:28:24.940163 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:28:24.940172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:28:24.940182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:28:24.940192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:28:24.940201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:28:24.940215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:28:24.940225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:28:24.940244 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:28:24.940254 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:28:24.940263 kernel: TSC deadline timer available
Nov 8 00:28:24.940273 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:28:24.940282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:28:24.940290 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:28:24.940301 kernel: kvm-guest: setup PV sched yield
Nov 8 00:28:24.940312 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:28:24.940319 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:28:24.940327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:28:24.940334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:28:24.940342 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:28:24.940349 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:28:24.940356 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:28:24.940363 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:28:24.940371 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:28:24.940382 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:28:24.940423 kernel: random: crng init done
Nov 8 00:28:24.940431 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:28:24.940439 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:28:24.940446 kernel: Fallback order for Node 0: 0
Nov 8 00:28:24.940453 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 8 00:28:24.940461 kernel: Policy zone: DMA32
Nov 8 00:28:24.940468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:28:24.940476 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Nov 8 00:28:24.940487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:28:24.940494 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:28:24.940502 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:28:24.940509 kernel: Dynamic Preempt: voluntary
Nov 8 00:28:24.940517 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:28:24.940525 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:28:24.940533 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:28:24.940540 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:28:24.940548 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:28:24.940558 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:28:24.940565 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:28:24.940573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:28:24.940584 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:28:24.940591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:28:24.940599 kernel: Console: colour VGA+ 80x25
Nov 8 00:28:24.940606 kernel: printk: console [ttyS0] enabled
Nov 8 00:28:24.940613 kernel: ACPI: Core revision 20230628
Nov 8 00:28:24.940621 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:28:24.940653 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:28:24.940660 kernel: x2apic enabled
Nov 8 00:28:24.940668 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:28:24.940675 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:28:24.940683 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:28:24.940690 kernel: kvm-guest: setup PV IPIs
Nov 8 00:28:24.940698 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:28:24.940716 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:28:24.940724 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 8 00:28:24.940732 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:28:24.940740 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:28:24.940750 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:28:24.940758 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:28:24.940766 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:28:24.940774 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:28:24.940782 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:28:24.940792 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:28:24.940800 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:28:24.940811 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:28:24.940819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:28:24.940828 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:28:24.940836 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:28:24.940844 kernel: active return thunk: srso_return_thunk
Nov 8 00:28:24.940852 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:28:24.940860 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:28:24.940870 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:28:24.940878 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:28:24.940886 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:28:24.940894 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:28:24.940902 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:28:24.940909 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:28:24.940917 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:28:24.940925 kernel: landlock: Up and running.
Nov 8 00:28:24.940932 kernel: SELinux: Initializing.
Nov 8 00:28:24.940943 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:28:24.940951 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:28:24.940959 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:28:24.940967 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:28:24.940975 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:28:24.940983 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:28:24.940991 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:28:24.941000 kernel: ... version: 0
Nov 8 00:28:24.941011 kernel: ... bit width: 48
Nov 8 00:28:24.941019 kernel: ... generic registers: 6
Nov 8 00:28:24.941026 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:28:24.941034 kernel: ... max period: 00007fffffffffff
Nov 8 00:28:24.941042 kernel: ... fixed-purpose events: 0
Nov 8 00:28:24.941049 kernel: ... event mask: 000000000000003f
Nov 8 00:28:24.941057 kernel: signal: max sigframe size: 1776
Nov 8 00:28:24.941065 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:28:24.941073 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:28:24.941083 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:28:24.941091 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:28:24.941098 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:28:24.941106 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:28:24.941114 kernel: smpboot: Max logical packages: 1
Nov 8 00:28:24.941122 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 8 00:28:24.941129 kernel: devtmpfs: initialized
Nov 8 00:28:24.941137 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:28:24.941145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:28:24.941153 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:28:24.941163 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:28:24.941171 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:28:24.941179 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:28:24.941187 kernel: audit: type=2000 audit(1762561703.878:1): state=initialized audit_enabled=0 res=1
Nov 8 00:28:24.941195 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:28:24.941202 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:28:24.941210 kernel: cpuidle: using governor menu
Nov 8 00:28:24.941218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:28:24.941225 kernel: dca service started, version 1.12.1
Nov 8 00:28:24.941246 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:28:24.941254 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:28:24.941264 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:28:24.941274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:28:24.941285 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:28:24.941296 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:28:24.941306 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:28:24.941316 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:28:24.941332 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:28:24.941343 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:28:24.941351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:28:24.941359 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:28:24.941367 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:28:24.941378 kernel: ACPI: Interpreter enabled
Nov 8 00:28:24.941409 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:28:24.941420 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:28:24.941428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:28:24.941436 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:28:24.941448 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:28:24.941456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:28:24.941695 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:28:24.941834 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:28:24.941963 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:28:24.941973 kernel: PCI host bridge to bus 0000:00
Nov 8 00:28:24.942114 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:28:24.942249 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:28:24.942369 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:28:24.942507 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:28:24.942626 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:28:24.942745 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 8 00:28:24.942861 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:28:24.943027 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:28:24.943180 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:28:24.943415 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:28:24.943550 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:28:24.943677 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:28:24.943803 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:28:24.943965 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:28:24.944104 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 8 00:28:24.944242 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:28:24.944372 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:28:24.944572 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:28:24.944774 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:28:24.944907 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:28:24.945091 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:28:24.945260 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:28:24.945407 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 8 00:28:24.945541 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:28:24.945668 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 8 00:28:24.945794 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:28:24.945939 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:28:24.946067 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:28:24.946254 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:28:24.946445 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 8 00:28:24.946583 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 8 00:28:24.946728 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:28:24.946858 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:28:24.946869 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:28:24.946883 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:28:24.946891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:28:24.946900 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:28:24.946908 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:28:24.946915 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:28:24.946923 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:28:24.946931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:28:24.946939 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:28:24.946947 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:28:24.946958 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:28:24.946965 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:28:24.946973 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:28:24.946981 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:28:24.946989 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:28:24.946997 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:28:24.947005 kernel: iommu: Default domain type: Translated
Nov 8 00:28:24.947013 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:28:24.947021 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:28:24.947032 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:28:24.947040 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:28:24.947048 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 8 00:28:24.947203 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:28:24.947361 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:28:24.947504 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:28:24.947516 kernel: vgaarb: loaded
Nov 8 00:28:24.947524 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:28:24.947532 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:28:24.947545 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:28:24.947553 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:28:24.947561 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:28:24.947569 kernel: pnp: PnP ACPI init
Nov 8 00:28:24.947726 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:28:24.947738 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:28:24.947747 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:28:24.947755 kernel: NET: Registered PF_INET protocol family
Nov 8 00:28:24.947767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:28:24.947775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:28:24.947783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:28:24.947791 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:28:24.947800 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:28:24.947808 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:28:24.947816 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:28:24.947824 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:28:24.947832 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:28:24.947843 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:28:24.947963 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:28:24.948080 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:28:24.948202 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:28:24.948329 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:28:24.948466 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:28:24.948585 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 8 00:28:24.948595 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:28:24.948608 kernel: Initialise system trusted keyrings
Nov 8 00:28:24.948616 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:28:24.948624 kernel: Key type asymmetric registered
Nov 8 00:28:24.948632 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:28:24.948640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:28:24.948647 kernel: io scheduler mq-deadline registered
Nov 8 00:28:24.948655 kernel: io scheduler kyber registered
Nov 8 00:28:24.948663 kernel: io scheduler bfq registered
Nov 8 00:28:24.948671 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:28:24.948682 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:28:24.948690 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:28:24.948698 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:28:24.948706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:28:24.948715 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:28:24.948723 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:28:24.948731 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:28:24.948739 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:28:24.948891 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:28:24.948908 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:28:24.949028 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:28:24.949149 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:28:24 UTC (1762561704)
Nov 8 00:28:24.949280 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:28:24.949290 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:28:24.949299 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:28:24.949307 kernel: Segment Routing with IPv6
Nov 8 00:28:24.949316 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:28:24.949328 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:28:24.949335 kernel: Key type dns_resolver registered
Nov 8 00:28:24.949343 kernel: IPI shorthand broadcast: enabled
Nov 8 00:28:24.949351 kernel: sched_clock: Marking stable (848003175, 191261496)->(1097914141, -58649470)
Nov 8 00:28:24.949359 kernel: registered taskstats version 1
Nov 8 00:28:24.949367 kernel: Loading compiled-in X.509 certificates
Nov 8 00:28:24.949375 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:28:24.949383 kernel: Key type .fscrypt registered
Nov 8 00:28:24.949403 kernel: Key type fscrypt-provisioning registered
Nov 8 00:28:24.949415 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:28:24.949423 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:28:24.949431 kernel: ima: No architecture policies found
Nov 8 00:28:24.949439 kernel: clk: Disabling unused clocks
Nov 8 00:28:24.949447 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:28:24.949455 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:28:24.949463 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:28:24.949470 kernel: Run /init as init process
Nov 8 00:28:24.949478 kernel: with arguments:
Nov 8 00:28:24.949489 kernel: /init
Nov 8 00:28:24.949497 kernel: with environment:
Nov 8 00:28:24.949505 kernel: HOME=/
Nov 8 00:28:24.949512 kernel: TERM=linux
Nov 8 00:28:24.949522 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:28:24.949533 systemd[1]: Detected virtualization kvm.
Nov 8 00:28:24.949542 systemd[1]: Detected architecture x86-64.
Nov 8 00:28:24.949550 systemd[1]: Running in initrd.
Nov 8 00:28:24.949562 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:28:24.949570 systemd[1]: Hostname set to .
Nov 8 00:28:24.949578 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:28:24.949586 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:28:24.949595 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:28:24.949603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:28:24.949612 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:28:24.949621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:28:24.949632 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:28:24.949654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:28:24.949667 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:28:24.949676 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:28:24.949688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:28:24.949696 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:28:24.949705 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:28:24.949714 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:28:24.949722 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:28:24.949731 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:28:24.949739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:28:24.949748 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:28:24.949757 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:28:24.949768 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:28:24.949778 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:28:24.949787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:28:24.949795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:28:24.949804 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:28:24.949813 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:28:24.949821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:28:24.949830 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:28:24.949838 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:28:24.949850 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:28:24.949859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:28:24.949867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:28:24.949876 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:28:24.949888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:28:24.949921 systemd-journald[192]: Collecting audit messages is disabled.
Nov 8 00:28:24.949944 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:28:24.949958 systemd-journald[192]: Journal started
Nov 8 00:28:24.949976 systemd-journald[192]: Runtime Journal (/run/log/journal/174dcb0586f44d6fa9c971fef980d859) is 6.0M, max 48.4M, 42.3M free.
Nov 8 00:28:24.945075 systemd-modules-load[193]: Inserted module 'overlay'
Nov 8 00:28:25.021890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:28:25.021942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:28:25.021971 kernel: Bridge firewalling registered
Nov 8 00:28:25.021998 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:28:24.972847 systemd-modules-load[193]: Inserted module 'br_netfilter'
Nov 8 00:28:25.025699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:28:25.028748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:25.043561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:28:25.047634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:28:25.052555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:28:25.054088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:28:25.063300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:28:25.070017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:28:25.076268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:28:25.077551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:28:25.083216 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:28:25.087532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:28:25.088447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:28:25.107025 dracut-cmdline[228]: dracut-dracut-053
Nov 8 00:28:25.110878 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:28:25.127575 systemd-resolved[229]: Positive Trust Anchors:
Nov 8 00:28:25.127594 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:28:25.127626 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:28:25.130408 systemd-resolved[229]: Defaulting to hostname 'linux'.
Nov 8 00:28:25.131666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:28:25.143264 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:28:25.224445 kernel: SCSI subsystem initialized
Nov 8 00:28:25.234419 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:28:25.245416 kernel: iscsi: registered transport (tcp)
Nov 8 00:28:25.267474 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:28:25.267569 kernel: QLogic iSCSI HBA Driver
Nov 8 00:28:25.320296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:28:25.334541 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:28:25.363547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:28:25.363625 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:28:25.365408 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:28:25.409439 kernel: raid6: avx2x4 gen() 26799 MB/s
Nov 8 00:28:25.426420 kernel: raid6: avx2x2 gen() 26002 MB/s
Nov 8 00:28:25.444243 kernel: raid6: avx2x1 gen() 24152 MB/s
Nov 8 00:28:25.444289 kernel: raid6: using algorithm avx2x4 gen() 26799 MB/s
Nov 8 00:28:25.462161 kernel: raid6: .... xor() 6901 MB/s, rmw enabled
Nov 8 00:28:25.462186 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:28:25.483423 kernel: xor: automatically using best checksumming function avx
Nov 8 00:28:25.646442 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:28:25.661350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:28:25.675600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:28:25.692727 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 8 00:28:25.699297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:28:25.710630 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:28:25.726919 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Nov 8 00:28:25.761817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:28:25.776594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:28:25.850839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:28:25.865610 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:28:25.882048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:28:25.904466 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 8 00:28:25.904694 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:28:25.887329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:28:25.891514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:28:25.900534 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:28:25.918651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:28:25.925566 kernel: libata version 3.00 loaded.
Nov 8 00:28:25.925595 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 8 00:28:25.930824 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:28:25.930916 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:28:25.931430 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:28:25.932972 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:28:25.943333 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:28:25.943357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:28:25.943371 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:28:25.948598 kernel: GPT:9289727 != 19775487
Nov 8 00:28:25.948611 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:28:25.948770 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:28:25.948781 kernel: GPT:9289727 != 19775487
Nov 8 00:28:25.948791 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:28:25.948801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:28:25.948811 kernel: scsi host0: ahci
Nov 8 00:28:25.949000 kernel: scsi host1: ahci
Nov 8 00:28:25.955517 kernel: scsi host2: ahci
Nov 8 00:28:25.955676 kernel: scsi host3: ahci
Nov 8 00:28:25.949985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:28:25.958116 kernel: scsi host4: ahci
Nov 8 00:28:25.950105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:28:25.978889 kernel: scsi host5: ahci
Nov 8 00:28:25.979085 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 8 00:28:25.979098 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 8 00:28:25.979109 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 8 00:28:25.979126 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 8 00:28:25.979137 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 8 00:28:25.979147 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 8 00:28:25.979157 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (459)
Nov 8 00:28:25.952328 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:28:25.983015 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Nov 8 00:28:25.966979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:28:25.967126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:25.973996 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:28:25.986938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:28:26.006226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:28:26.073353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:26.085727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:28:26.086361 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:28:26.094850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:28:26.103012 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:28:26.120552 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:28:26.125098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:28:26.130525 disk-uuid[555]: Primary Header is updated.
Nov 8 00:28:26.130525 disk-uuid[555]: Secondary Entries is updated.
Nov 8 00:28:26.130525 disk-uuid[555]: Secondary Header is updated.
Nov 8 00:28:26.136420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:28:26.141419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:28:26.145847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:28:26.287415 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:28:26.287508 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:28:26.287520 kernel: ata3.00: applying bridge limits
Nov 8 00:28:26.287531 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:28:26.287542 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:28:26.290428 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:28:26.290500 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:28:26.291422 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:28:26.292425 kernel: ata3.00: configured for UDMA/100
Nov 8 00:28:26.295422 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:28:26.353419 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:28:26.353717 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:28:26.372423 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:28:27.144295 disk-uuid[558]: The operation has completed successfully.
Nov 8 00:28:27.147272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:28:27.381915 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:28:27.382181 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:28:27.402815 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:28:27.409451 sh[589]: Success
Nov 8 00:28:27.426431 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:28:27.470772 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:28:27.481304 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:28:27.486790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:28:27.500640 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:28:27.500732 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:27.500747 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:28:27.502476 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:28:27.503806 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:28:27.509952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:28:27.512466 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:28:27.523581 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:28:27.526250 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:28:27.540169 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:27.540240 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:27.540256 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:28:27.544656 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:28:27.560683 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:28:27.563555 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:27.588906 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:28:27.622261 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:28:27.746768 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:28:27.767811 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:28:27.780564 ignition[691]: Ignition 2.19.0
Nov 8 00:28:27.780578 ignition[691]: Stage: fetch-offline
Nov 8 00:28:27.780647 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:27.780663 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:28:27.780780 ignition[691]: parsed url from cmdline: ""
Nov 8 00:28:27.780785 ignition[691]: no config URL provided
Nov 8 00:28:27.780792 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:28:27.780805 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:28:27.780847 ignition[691]: op(1): [started] loading QEMU firmware config module
Nov 8 00:28:27.780856 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:28:27.790673 ignition[691]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:28:27.794449 systemd-networkd[775]: lo: Link UP
Nov 8 00:28:27.794455 systemd-networkd[775]: lo: Gained carrier
Nov 8 00:28:27.796742 systemd-networkd[775]: Enumeration completed
Nov 8 00:28:27.796905 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:28:27.797297 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:28:27.797302 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:28:27.800698 systemd[1]: Reached target network.target - Network.
Nov 8 00:28:27.801868 systemd-networkd[775]: eth0: Link UP
Nov 8 00:28:27.801874 systemd-networkd[775]: eth0: Gained carrier
Nov 8 00:28:27.801883 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:28:27.828479 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:28:27.896309 ignition[691]: parsing config with SHA512: 5cce8ef8ba785c39d3af7133cd30b9d591818c3b4204632145aa566cc4a48e460f8dc9a36c8913a253fd677e9ef6440ead0588c3ff4a04e90ed3f4b230342407
Nov 8 00:28:27.960087 unknown[691]: fetched base config from "system"
Nov 8 00:28:27.960111 unknown[691]: fetched user config from "qemu"
Nov 8 00:28:27.960574 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.140
Nov 8 00:28:27.960593 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Nov 8 00:28:27.966542 ignition[691]: fetch-offline: fetch-offline passed
Nov 8 00:28:27.966755 ignition[691]: Ignition finished successfully
Nov 8 00:28:27.973119 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:28:27.977580 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:28:27.994555 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:28:28.020467 ignition[780]: Ignition 2.19.0
Nov 8 00:28:28.020487 ignition[780]: Stage: kargs
Nov 8 00:28:28.020722 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:28.020737 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:28:28.022007 ignition[780]: kargs: kargs passed
Nov 8 00:28:28.022074 ignition[780]: Ignition finished successfully
Nov 8 00:28:28.029608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:28:28.104813 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:28:28.132618 ignition[788]: Ignition 2.19.0
Nov 8 00:28:28.132632 ignition[788]: Stage: disks
Nov 8 00:28:28.132841 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:28.132856 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:28:28.133953 ignition[788]: disks: disks passed
Nov 8 00:28:28.134010 ignition[788]: Ignition finished successfully
Nov 8 00:28:28.189571 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:28:28.193163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:28:28.193860 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:28:28.197901 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:28:28.204268 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:28:28.204930 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:28:28.221551 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:28:28.241631 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:28:28.340463 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:28:28.349515 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:28:28.473416 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:28:28.473836 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:28:28.475726 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:28:28.490626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:28:28.493661 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:28:28.501935 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Nov 8 00:28:28.503495 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:28.496312 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:28:28.507859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:28.507876 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:28:28.496364 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:28:28.496410 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:28:28.503948 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:28:28.509413 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:28:28.527766 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:28:28.528426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:28:28.563909 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:28:28.571238 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:28:28.580956 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:28:28.586588 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:28:28.714900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:28:28.729520 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:28:28.734191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:28:28.739044 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:28:28.743435 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:28.763701 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:28:28.879298 ignition[921]: INFO : Ignition 2.19.0
Nov 8 00:28:28.879298 ignition[921]: INFO : Stage: mount
Nov 8 00:28:28.882370 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:28.882370 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:28:28.882370 ignition[921]: INFO : mount: mount passed
Nov 8 00:28:28.882370 ignition[921]: INFO : Ignition finished successfully
Nov 8 00:28:28.884059 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:28:28.894962 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:28:28.905244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:28:28.921700 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (933)
Nov 8 00:28:28.921737 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:28.921752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:28.923213 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:28:28.928429 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:28:28.931907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:28:28.966638 ignition[950]: INFO : Ignition 2.19.0
Nov 8 00:28:28.966638 ignition[950]: INFO : Stage: files
Nov 8 00:28:28.969422 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:28.969422 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:28:28.969422 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:28:28.969422 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:28:28.969422 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:28:28.980483 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:28:28.980483 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:28:28.980483 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:28:28.980483 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:28:28.980483 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:28:28.973270 unknown[950]: wrote ssh authorized keys file for user: core
Nov 8 00:28:29.015716 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:28:29.087684 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:28:29.087684 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:28:29.094342 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 8 00:28:29.182336 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:28:29.204607 systemd-networkd[775]: eth0: Gained IPv6LL
Nov 8 00:28:29.527506 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:28:29.527506 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:28:29.533211 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:28:29.533211 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:28:29.538778 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:28:29.541571 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:28:29.544726 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:28:29.547587 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:28:29.550453 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:28:29.553474 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:28:29.556498 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:28:29.559326 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:28:29.563533 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:28:29.567559 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:28:29.571027 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 8 00:28:29.925613 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:28:31.067306 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:28:31.067306 ignition[950]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 8 00:28:31.073336 ignition[950]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:28:31.125144 ignition[950]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:28:31.133838 ignition[950]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:28:31.136631 ignition[950]: INFO : files: files passed
Nov 8 00:28:31.136631 ignition[950]: INFO : Ignition finished successfully
Nov 8 00:28:31.154808 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:28:31.177587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:28:31.181806 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:28:31.183228 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:28:31.183374 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:28:31.201487 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:28:31.208322 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:31.208322 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:31.213603 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:31.217523 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:31.220221 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:28:31.238551 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:28:31.271476 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:28:31.271612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:28:31.276111 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:28:31.280509 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:28:31.282639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:28:31.299600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:28:31.316038 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:31.321918 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:28:31.346691 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:31.347530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:31.348059 systemd[1]: Stopped target timers.target - Timer Units. 
Nov 8 00:28:31.430598 ignition[1004]: INFO : Ignition 2.19.0 Nov 8 00:28:31.430598 ignition[1004]: INFO : Stage: umount Nov 8 00:28:31.430598 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:31.430598 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:28:31.430598 ignition[1004]: INFO : umount: umount passed Nov 8 00:28:31.430598 ignition[1004]: INFO : Ignition finished successfully Nov 8 00:28:31.348367 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:28:31.348519 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:31.348943 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:28:31.349242 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:28:31.349815 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:28:31.350123 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:31.350446 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:31.350714 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:28:31.350983 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:31.352076 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:28:31.352373 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:28:31.352949 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:28:31.353192 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:28:31.353328 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:31.356694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:31.357005 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 8 00:28:31.357582 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:28:31.357722 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:31.358127 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:28:31.358280 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:31.358714 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:31.358846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:31.359112 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:31.359309 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:31.362514 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:31.363127 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:31.363906 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:31.364222 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:31.364340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:31.364827 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:31.364925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:31.365102 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:28:31.365238 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:31.365388 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:28:31.365510 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:28:31.366547 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:28:31.367040 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Nov 8 00:28:31.367159 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:31.368085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:28:31.368524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:28:31.368642 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:31.368879 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:28:31.368983 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:31.372949 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:28:31.373088 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:28:31.391185 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:28:31.391343 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:28:31.392206 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:31.392421 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:28:31.392487 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:31.392710 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:28:31.392763 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:31.392993 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:31.393043 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:31.393303 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:31.393354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:31.394109 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:31.395107 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:31.397683 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 8 00:28:31.426256 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:31.426445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:31.431108 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:31.431187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:31.432975 systemd-networkd[775]: eth0: DHCPv6 lease lost Nov 8 00:28:31.435130 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:31.435338 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:31.439221 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:31.439268 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:31.454501 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:31.456867 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:31.456930 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:31.461073 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:31.461133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:31.464843 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:31.464900 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:31.465997 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:31.477411 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:28:31.477615 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:31.494479 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:31.494737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 8 00:28:31.498452 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:31.498516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:31.502022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:31.502075 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:31.505823 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:31.505889 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:31.509317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:31.509374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:31.512975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:31.513032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:31.527522 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:31.530617 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:28:31.530688 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:31.534943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:31.535007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:31.538947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:31.539073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:31.885646 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:28:31.885825 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:28:31.889290 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:31.892418 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Nov 8 00:28:31.892490 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:31.906578 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:31.916383 systemd[1]: Switching root. Nov 8 00:28:31.949511 systemd-journald[192]: Journal stopped Nov 8 00:28:33.999165 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 8 00:28:33.999279 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:28:33.999304 kernel: SELinux: policy capability open_perms=1 Nov 8 00:28:33.999318 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:28:33.999332 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:28:33.999353 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:28:33.999374 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:28:33.999404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:28:33.999418 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:28:33.999440 kernel: audit: type=1403 audit(1762561712.969:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:28:33.999456 systemd[1]: Successfully loaded SELinux policy in 44.429ms. Nov 8 00:28:33.999487 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.232ms. Nov 8 00:28:33.999503 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:28:33.999519 systemd[1]: Detected virtualization kvm. Nov 8 00:28:33.999534 systemd[1]: Detected architecture x86-64. Nov 8 00:28:33.999549 systemd[1]: Detected first boot. Nov 8 00:28:33.999571 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:28:33.999586 zram_generator::config[1049]: No configuration found. 
Nov 8 00:28:33.999611 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:28:33.999626 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:28:33.999641 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:28:33.999656 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:33.999672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:28:33.999687 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:28:33.999702 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:28:33.999717 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:28:33.999740 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:28:33.999755 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:28:33.999770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:28:33.999784 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:28:33.999803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:33.999818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:33.999842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:28:33.999880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:28:33.999911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:28:33.999938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:28:33.999953 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Nov 8 00:28:33.999979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:34.000011 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:28:34.000026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:28:34.000049 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:28:34.000064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:28:34.000086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:34.000102 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:28:34.000127 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:28:34.000142 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:28:34.000158 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:28:34.000174 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:28:34.000190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:34.000205 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:34.000219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:34.000235 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:28:34.000257 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:28:34.000282 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:28:34.000296 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:28:34.000312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:34.000328 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Nov 8 00:28:34.000344 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:28:34.000359 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:28:34.000376 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:28:34.000413 systemd[1]: Reached target machines.target - Containers. Nov 8 00:28:34.000430 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:28:34.000445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:28:34.000461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:28:34.000476 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:28:34.000490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:34.000506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:28:34.000521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:34.000536 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:28:34.000558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:34.000574 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:28:34.000590 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:28:34.000604 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:28:34.000619 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:28:34.000647 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 8 00:28:34.000662 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:28:34.000677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:28:34.000715 systemd-journald[1112]: Collecting audit messages is disabled. Nov 8 00:28:34.000749 kernel: loop: module loaded Nov 8 00:28:34.000764 systemd-journald[1112]: Journal started Nov 8 00:28:34.000792 systemd-journald[1112]: Runtime Journal (/run/log/journal/174dcb0586f44d6fa9c971fef980d859) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:28:33.581788 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:28:33.601457 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:28:33.601945 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:28:33.602319 systemd[1]: systemd-journald.service: Consumed 1.089s CPU time. Nov 8 00:28:34.003429 kernel: fuse: init (API version 7.39) Nov 8 00:28:34.007824 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:28:34.014322 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:28:34.018554 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:28:34.022431 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:28:34.022456 systemd[1]: Stopped verity-setup.service. Nov 8 00:28:34.030430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:34.047420 kernel: ACPI: bus type drm_connector registered Nov 8 00:28:34.052666 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:28:34.053900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:28:34.055823 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 8 00:28:34.057825 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:28:34.059748 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:28:34.061710 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:28:34.063683 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:28:34.065623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:34.100471 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:28:34.100733 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:28:34.121649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:34.121859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:34.124129 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:28:34.124318 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:28:34.126588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:34.126773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:34.129153 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:28:34.129340 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:28:34.131494 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:34.131679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:34.133820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:34.135987 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:28:34.138704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:28:34.155216 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 8 00:28:34.164540 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:28:34.168127 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:28:34.169988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:28:34.170025 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:34.172727 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:28:34.175957 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:28:34.179515 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:28:34.181342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:34.186593 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:28:34.190714 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:28:34.192734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:28:34.196575 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:28:34.198604 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:28:34.205852 systemd-journald[1112]: Time spent on flushing to /var/log/journal/174dcb0586f44d6fa9c971fef980d859 is 27.360ms for 951 entries. Nov 8 00:28:34.205852 systemd-journald[1112]: System Journal (/var/log/journal/174dcb0586f44d6fa9c971fef980d859) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:28:34.452915 systemd-journald[1112]: Received client request to flush runtime journal. 
Nov 8 00:28:34.452995 kernel: loop0: detected capacity change from 0 to 229808 Nov 8 00:28:34.453036 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:28:34.453061 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:28:34.453095 kernel: loop2: detected capacity change from 0 to 140768 Nov 8 00:28:34.203540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:28:34.207701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:28:34.211635 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:28:34.213785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:28:34.216090 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:28:34.226767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:34.239879 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:28:34.261891 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:28:34.286240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:34.383510 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:28:34.387011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:28:34.403753 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:28:34.457249 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:28:34.461549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:28:34.476735 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 8 00:28:34.519649 kernel: loop3: detected capacity change from 0 to 229808 Nov 8 00:28:34.541516 kernel: loop4: detected capacity change from 0 to 142488 Nov 8 00:28:34.541081 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:28:34.554227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:28:34.559156 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:28:34.560372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:28:34.561524 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:28:34.570731 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:28:34.573184 (sd-merge)[1183]: Merged extensions into '/usr'. Nov 8 00:28:34.578655 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:28:34.578758 systemd[1]: Reloading... Nov 8 00:28:34.609634 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 8 00:28:34.609661 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 8 00:28:34.807132 zram_generator::config[1217]: No configuration found. Nov 8 00:28:35.032510 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:28:35.054775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:35.113191 systemd[1]: Reloading finished in 533 ms. Nov 8 00:28:35.249220 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:28:35.251633 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:28:35.254172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:28:35.279806 systemd[1]: Starting ensure-sysext.service... Nov 8 00:28:35.283943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:28:35.291521 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:28:35.291540 systemd[1]: Reloading... Nov 8 00:28:35.323630 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:28:35.324404 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:28:35.325362 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:28:35.325674 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Nov 8 00:28:35.325759 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Nov 8 00:28:35.332215 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:28:35.332310 systemd-tmpfiles[1253]: Skipping /boot Nov 8 00:28:35.347733 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:28:35.347751 systemd-tmpfiles[1253]: Skipping /boot Nov 8 00:28:35.362417 zram_generator::config[1286]: No configuration found. Nov 8 00:28:35.501158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:35.567541 systemd[1]: Reloading finished in 275 ms. Nov 8 00:28:35.588425 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:28:35.602919 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:35.615527 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Nov 8 00:28:35.619272 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:28:35.622951 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:28:35.628436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:28:35.635797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:35.642735 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:28:35.647242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:35.647505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:28:35.648862 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:35.653499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:35.660505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:35.663192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:35.666690 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:28:35.668454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:35.670303 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:28:35.673934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:35.674207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:35.677712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 8 00:28:35.678012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:35.681466 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:35.681610 augenrules[1344]: No rules
Nov 8 00:28:35.681755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:35.684919 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:28:35.693833 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Nov 8 00:28:35.699440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:35.699721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:35.710299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:35.715344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:35.732798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:35.734735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:35.739242 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:28:35.741142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:35.742460 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:28:35.745023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:28:35.748414 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:28:35.751586 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:28:35.755126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:35.755356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:35.760137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:35.760453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:35.763205 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:35.763441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:35.773357 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:28:35.786714 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:28:35.796011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:35.796178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:35.802595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:35.807129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:28:35.811618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:35.824594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:35.826564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:35.834659 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:28:35.840461 systemd-resolved[1324]: Positive Trust Anchors:
Nov 8 00:28:35.840471 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:28:35.840504 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:28:35.843448 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:28:35.890199 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1376)
Nov 8 00:28:35.871165 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:28:35.871206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:35.872242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:35.872591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:35.877339 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Nov 8 00:28:35.878677 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:28:35.879030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:28:35.881974 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:28:35.885256 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:35.885677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:35.889142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:35.889421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:35.905293 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:28:35.920409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:28:35.923537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:28:35.925566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:28:35.925648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:28:35.939438 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:28:35.962435 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 8 00:28:35.988559 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:28:35.988931 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:28:35.989184 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:28:35.997607 systemd-networkd[1394]: lo: Link UP
Nov 8 00:28:35.998006 systemd-networkd[1394]: lo: Gained carrier
Nov 8 00:28:35.999944 systemd-networkd[1394]: Enumeration completed
Nov 8 00:28:36.000559 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:28:36.004198 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:28:36.004203 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:28:36.014802 systemd-networkd[1394]: eth0: Link UP
Nov 8 00:28:36.014813 systemd-networkd[1394]: eth0: Gained carrier
Nov 8 00:28:36.014827 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:28:36.016336 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:28:36.020510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:28:36.022617 systemd[1]: Reached target network.target - Network.
Nov 8 00:28:36.024136 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:28:36.115456 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:28:36.123032 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:28:36.141417 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:28:36.142656 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:28:36.144644 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection.
Nov 8 00:28:36.145978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:28:36.152546 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 8 00:28:36.152602 systemd-timesyncd[1396]: Initial clock synchronization to Sat 2025-11-08 00:28:36.463249 UTC.
Nov 8 00:28:36.157445 kernel: kvm_amd: TSC scaling supported
Nov 8 00:28:36.157526 kernel: kvm_amd: Nested Virtualization enabled
Nov 8 00:28:36.157540 kernel: kvm_amd: Nested Paging enabled
Nov 8 00:28:36.157554 kernel: kvm_amd: LBR virtualization supported
Nov 8 00:28:36.157566 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 8 00:28:36.158464 kernel: kvm_amd: Virtual GIF supported
Nov 8 00:28:36.175284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:28:36.186526 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:28:36.227358 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:28:36.244648 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:28:36.312792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:36.322989 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:28:36.368573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:28:36.387349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:28:36.389699 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:28:36.391915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:28:36.394577 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:28:36.397141 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:28:36.399407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:28:36.401977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:28:36.404456 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:28:36.404498 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:28:36.406265 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:28:36.409627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:28:36.413595 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:28:36.424920 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:28:36.428249 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:28:36.431107 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:28:36.433209 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:28:36.434989 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:28:36.436722 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:28:36.436753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:28:36.437970 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:28:36.441072 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:28:36.443470 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:28:36.446501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:28:36.450576 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:28:36.452489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:28:36.455811 jq[1431]: false
Nov 8 00:28:36.456351 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:28:36.463521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:28:36.466598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:28:36.472071 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:28:36.479568 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:28:36.481869 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:28:36.482383 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:28:36.483266 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:28:36.487559 dbus-daemon[1430]: [system] SELinux support is enabled
Nov 8 00:28:36.487699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found loop3
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found loop4
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found loop5
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found sr0
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda1
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda2
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda3
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found usr
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda4
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda6
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda7
Nov 8 00:28:36.489109 extend-filesystems[1432]: Found vda9
Nov 8 00:28:36.489109 extend-filesystems[1432]: Checking size of /dev/vda9
Nov 8 00:28:36.499503 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:28:36.513382 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:28:36.513835 jq[1443]: true
Nov 8 00:28:36.521024 update_engine[1441]: I20251108 00:28:36.520318 1441 main.cc:92] Flatcar Update Engine starting
Nov 8 00:28:36.529196 update_engine[1441]: I20251108 00:28:36.521744 1441 update_check_scheduler.cc:74] Next update check in 8m30s
Nov 8 00:28:36.529227 extend-filesystems[1432]: Resized partition /dev/vda9
Nov 8 00:28:36.542136 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1376)
Nov 8 00:28:36.521839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:28:36.522085 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:28:36.522491 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:28:36.522705 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:28:36.525758 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:28:36.525963 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:28:36.555423 jq[1455]: true
Nov 8 00:28:36.555627 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:28:36.577850 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:28:36.590760 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:28:36.598567 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 8 00:28:36.596181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:28:36.596210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:28:36.598836 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:28:36.598858 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:28:36.600561 tar[1454]: linux-amd64/LICENSE
Nov 8 00:28:36.600561 tar[1454]: linux-amd64/helm
Nov 8 00:28:36.608575 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:28:36.623567 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 8 00:28:36.623602 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:28:36.624466 systemd-logind[1438]: New seat seat0.
Nov 8 00:28:36.630103 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:28:36.697827 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 8 00:28:36.723584 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:28:36.729738 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 8 00:28:36.729738 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 8 00:28:36.729738 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 8 00:28:36.737384 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Nov 8 00:28:36.739102 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:28:36.739208 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:28:36.739230 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:28:36.739641 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:28:36.743874 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:28:36.752253 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 8 00:28:36.793505 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:28:37.076785 systemd-networkd[1394]: eth0: Gained IPv6LL
Nov 8 00:28:37.122066 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:28:37.124361 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:28:37.128190 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:28:37.133352 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 8 00:28:37.140572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:28:37.192644 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:28:37.194982 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:28:37.195215 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:28:37.262965 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:28:37.301700 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:28:37.307343 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:28:37.310705 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 8 00:28:37.311291 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 8 00:28:37.327810 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:28:37.329984 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:28:37.368853 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:28:37.371703 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:28:37.526914 tar[1454]: linux-amd64/README.md
Nov 8 00:28:37.580126 containerd[1467]: time="2025-11-08T00:28:37.579851351Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.627763306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.631381305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.631450799Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.631480337Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.631843223Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.631871221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.632046742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.632068068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.632442528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.632471462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713076 containerd[1467]: time="2025-11-08T00:28:37.632496150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713467 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.632513115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.632698566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.633187805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.633457758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.633488577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.633694322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.633823112Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641072898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641177926Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641203165Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641267799Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641300273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:28:37.713682 containerd[1467]: time="2025-11-08T00:28:37.641676835Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.642456397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.642821865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.642844543Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.642941921Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.642996282Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643018451Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643034074Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643053006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643107294Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643149696Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643187468Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643226008Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643282878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714222 containerd[1467]: time="2025-11-08T00:28:37.643309657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643325873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643342349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643360917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643377299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643391652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643409231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643425020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643463343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643479954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643494306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643518901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643551279Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643579558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643594035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.714602 containerd[1467]: time="2025-11-08T00:28:37.643621065Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643819963Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643875719Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643924866Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643945244Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643958640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.643982027Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.644023773Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:28:37.715026 containerd[1467]: time="2025-11-08T00:28:37.644046234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.644547630Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.644645643Z" level=info msg="Connect containerd service"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.644702876Z" level=info msg="using legacy CRI server"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.644711234Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.644896674Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.648029980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649129568Z" level=info msg="Start subscribing containerd event"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649262031Z" level=info msg="Start recovering state"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649408149Z" level=info msg="Start event monitor"
Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649469754Z" level=info msg="Start snapshots
syncer" Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649490768Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.649512947Z" level=info msg="Start streaming server" Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.650800984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.650875692Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:28:37.715481 containerd[1467]: time="2025-11-08T00:28:37.650968636Z" level=info msg="containerd successfully booted in 0.079897s" Nov 8 00:28:37.723855 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:28:38.010201 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:28:38.028754 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:43014.service - OpenSSH per-connection server daemon (10.0.0.1:43014). Nov 8 00:28:38.104432 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 43014 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:38.109750 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:38.123521 systemd-logind[1438]: New session 1 of user core. Nov 8 00:28:38.125266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:28:38.291023 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:28:38.317607 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:28:38.332807 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:28:38.338191 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:28:38.517419 systemd[1542]: Queued start job for default target default.target. 
Nov 8 00:28:38.527516 systemd[1542]: Created slice app.slice - User Application Slice. Nov 8 00:28:38.527551 systemd[1542]: Reached target paths.target - Paths. Nov 8 00:28:38.527589 systemd[1542]: Reached target timers.target - Timers. Nov 8 00:28:38.530456 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:28:38.570837 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:28:38.570985 systemd[1542]: Reached target sockets.target - Sockets. Nov 8 00:28:38.571002 systemd[1542]: Reached target basic.target - Basic System. Nov 8 00:28:38.571047 systemd[1542]: Reached target default.target - Main User Target. Nov 8 00:28:38.571105 systemd[1542]: Startup finished in 224ms. Nov 8 00:28:38.571391 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:28:38.664732 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:28:38.739771 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024). Nov 8 00:28:38.791622 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:38.793902 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:38.800027 systemd-logind[1438]: New session 2 of user core. Nov 8 00:28:38.811651 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:28:38.876830 sshd[1553]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:38.890656 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:43024.service: Deactivated successfully. Nov 8 00:28:38.893906 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:28:38.896353 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:28:38.909416 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:43040.service - OpenSSH per-connection server daemon (10.0.0.1:43040). 
Nov 8 00:28:38.914370 systemd-logind[1438]: Removed session 2. Nov 8 00:28:38.953126 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 43040 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:38.955377 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:38.961356 systemd-logind[1438]: New session 3 of user core. Nov 8 00:28:38.970608 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:28:39.034984 sshd[1560]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:39.040022 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:43040.service: Deactivated successfully. Nov 8 00:28:39.042751 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:28:39.043984 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:28:39.045211 systemd-logind[1438]: Removed session 3. Nov 8 00:28:39.328480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:39.375921 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:39.377293 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:28:39.379594 systemd[1]: Startup finished in 990ms (kernel) + 8.250s (initrd) + 6.454s (userspace) = 15.695s. Nov 8 00:28:40.470666 kubelet[1571]: E1108 00:28:40.470587 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:40.475404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:40.475703 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:28:40.476115 systemd[1]: kubelet.service: Consumed 2.928s CPU time. Nov 8 00:28:49.220167 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). Nov 8 00:28:49.261890 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:49.263882 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:49.268413 systemd-logind[1438]: New session 4 of user core. Nov 8 00:28:49.277606 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:28:49.336070 sshd[1585]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:49.348422 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:34884.service: Deactivated successfully. Nov 8 00:28:49.350599 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:28:49.352289 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:28:49.364857 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:34894.service - OpenSSH per-connection server daemon (10.0.0.1:34894). Nov 8 00:28:49.366019 systemd-logind[1438]: Removed session 4. Nov 8 00:28:49.402864 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:49.404506 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:49.408977 systemd-logind[1438]: New session 5 of user core. Nov 8 00:28:49.426561 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:28:49.477447 sshd[1592]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:49.498528 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:34894.service: Deactivated successfully. Nov 8 00:28:49.500913 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:28:49.502730 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. 
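The kubelet failures recorded above (and repeated later in this log as the restart counter climbs) all trace to the same condition: kubelet exits with status 1 because /var/lib/kubelet/config.yaml cannot be read. On a kubeadm-managed node that file is normally generated by `kubeadm init` or `kubeadm join`; until one of those runs, systemd's restart policy just re-hits the missing file. A minimal sketch of the check an operator might run, assuming the standard path (the `KUBELET_CONFIG` variable name is illustrative, not from the log):

```shell
#!/bin/sh
# Illustrative pre-flight check for the condition behind the kubelet
# failures in this log: kubelet refuses to start when its config file
# is absent or unreadable.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"

if [ -r "$KUBELET_CONFIG" ]; then
    echo "ok: $KUBELET_CONFIG is readable; kubelet can load its config"
else
    # Matches the log error: "open /var/lib/kubelet/config.yaml:
    # no such file or directory". kubeadm init/join writes this file.
    echo "missing: $KUBELET_CONFIG (expected after 'kubeadm init' or 'kubeadm join')"
fi
```

Whether this node was about to be joined is not visible in the log itself; the "Scheduled restart job, restart counter is at N" lines are simply systemd's `Restart=` policy retrying the unit until the file appears.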
Nov 8 00:28:49.504138 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:34902.service - OpenSSH per-connection server daemon (10.0.0.1:34902). Nov 8 00:28:49.504944 systemd-logind[1438]: Removed session 5. Nov 8 00:28:49.546891 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 34902 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:49.549143 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:49.553866 systemd-logind[1438]: New session 6 of user core. Nov 8 00:28:49.565554 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:28:49.622416 sshd[1599]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:49.639835 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:34902.service: Deactivated successfully. Nov 8 00:28:49.642326 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:28:49.644088 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:28:49.655873 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:34918.service - OpenSSH per-connection server daemon (10.0.0.1:34918). Nov 8 00:28:49.656919 systemd-logind[1438]: Removed session 6. Nov 8 00:28:49.690604 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 34918 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:49.692490 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:49.697853 systemd-logind[1438]: New session 7 of user core. Nov 8 00:28:49.707580 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 8 00:28:49.769436 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:28:49.769848 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:49.786718 sudo[1610]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:49.789580 sshd[1606]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:49.800605 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:34918.service: Deactivated successfully. Nov 8 00:28:49.802817 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:28:49.806524 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:28:49.818763 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:34922.service - OpenSSH per-connection server daemon (10.0.0.1:34922). Nov 8 00:28:49.820040 systemd-logind[1438]: Removed session 7. Nov 8 00:28:49.859330 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34922 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:49.861218 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:49.865773 systemd-logind[1438]: New session 8 of user core. Nov 8 00:28:49.875571 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:28:49.930321 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:28:49.930710 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:49.934602 sudo[1619]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:49.941169 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:28:49.941613 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:49.960669 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Nov 8 00:28:49.962488 auditctl[1622]: No rules Nov 8 00:28:49.962960 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:28:49.963205 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:49.966025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:49.997687 augenrules[1640]: No rules Nov 8 00:28:49.999678 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:50.001020 sudo[1618]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:50.003036 sshd[1615]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:50.018478 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:34922.service: Deactivated successfully. Nov 8 00:28:50.020418 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:28:50.022136 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:28:50.030730 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:34938.service - OpenSSH per-connection server daemon (10.0.0.1:34938). Nov 8 00:28:50.032553 systemd-logind[1438]: Removed session 8. Nov 8 00:28:50.065919 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 34938 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:28:50.067607 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:50.072652 systemd-logind[1438]: New session 9 of user core. Nov 8 00:28:50.081584 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:28:50.135766 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:28:50.136111 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:50.650974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:50.660954 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 8 00:28:50.663943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:50.665007 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:28:50.968274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:50.986727 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:51.086081 kubelet[1683]: E1108 00:28:51.086003 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:51.093242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:51.093491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:51.301212 dockerd[1669]: time="2025-11-08T00:28:51.300969830Z" level=info msg="Starting up" Nov 8 00:28:52.493681 dockerd[1669]: time="2025-11-08T00:28:52.493519699Z" level=info msg="Loading containers: start." Nov 8 00:28:52.702429 kernel: Initializing XFRM netlink socket Nov 8 00:28:52.796828 systemd-networkd[1394]: docker0: Link UP Nov 8 00:28:52.819148 dockerd[1669]: time="2025-11-08T00:28:52.819109505Z" level=info msg="Loading containers: done." Nov 8 00:28:52.837200 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck544125369-merged.mount: Deactivated successfully. 
Nov 8 00:28:52.840186 dockerd[1669]: time="2025-11-08T00:28:52.840102064Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:28:52.840369 dockerd[1669]: time="2025-11-08T00:28:52.840259096Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:28:52.840481 dockerd[1669]: time="2025-11-08T00:28:52.840445597Z" level=info msg="Daemon has completed initialization" Nov 8 00:28:52.892237 dockerd[1669]: time="2025-11-08T00:28:52.892125027Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:28:52.892704 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:28:53.912415 containerd[1467]: time="2025-11-08T00:28:53.912314867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:28:56.246497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795504258.mount: Deactivated successfully. 
Nov 8 00:28:57.748987 containerd[1467]: time="2025-11-08T00:28:57.748920320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:57.749909 containerd[1467]: time="2025-11-08T00:28:57.749827640Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:28:57.751591 containerd[1467]: time="2025-11-08T00:28:57.751560558Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:57.755634 containerd[1467]: time="2025-11-08T00:28:57.755580416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:57.757034 containerd[1467]: time="2025-11-08T00:28:57.756997407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.844598298s" Nov 8 00:28:57.757095 containerd[1467]: time="2025-11-08T00:28:57.757053893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:28:57.758020 containerd[1467]: time="2025-11-08T00:28:57.757804350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:29:00.172337 containerd[1467]: time="2025-11-08T00:29:00.172239858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.174432 containerd[1467]: time="2025-11-08T00:29:00.174377904Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 00:29:00.176533 containerd[1467]: time="2025-11-08T00:29:00.176468206Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.180028 containerd[1467]: time="2025-11-08T00:29:00.179973520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.181428 containerd[1467]: time="2025-11-08T00:29:00.181326486Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.423480675s" Nov 8 00:29:00.181428 containerd[1467]: time="2025-11-08T00:29:00.181417611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:29:00.182236 containerd[1467]: time="2025-11-08T00:29:00.182026686Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:29:01.267074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:29:01.281256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:01.585874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:29:01.586913 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:01.659303 kubelet[1906]: E1108 00:29:01.659213 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:01.664331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:01.664638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:02.557075 containerd[1467]: time="2025-11-08T00:29:02.557022570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:02.573590 containerd[1467]: time="2025-11-08T00:29:02.573499259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:29:02.590263 containerd[1467]: time="2025-11-08T00:29:02.590217503Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:02.609324 containerd[1467]: time="2025-11-08T00:29:02.609267477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:02.610708 containerd[1467]: time="2025-11-08T00:29:02.610635994Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.42857283s" Nov 8 00:29:02.610708 containerd[1467]: time="2025-11-08T00:29:02.610697554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:29:02.611652 containerd[1467]: time="2025-11-08T00:29:02.611622530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:29:03.886190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004978282.mount: Deactivated successfully. Nov 8 00:29:04.297298 containerd[1467]: time="2025-11-08T00:29:04.297234855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:04.298022 containerd[1467]: time="2025-11-08T00:29:04.297973759Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:29:04.299283 containerd[1467]: time="2025-11-08T00:29:04.299257227Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:04.301804 containerd[1467]: time="2025-11-08T00:29:04.301774989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:04.302621 containerd[1467]: time="2025-11-08T00:29:04.302555435Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", 
size \"31928488\" in 1.690896359s" Nov 8 00:29:04.302664 containerd[1467]: time="2025-11-08T00:29:04.302621047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:29:04.303244 containerd[1467]: time="2025-11-08T00:29:04.303180395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:29:05.174375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692658069.mount: Deactivated successfully. Nov 8 00:29:06.683889 containerd[1467]: time="2025-11-08T00:29:06.683816524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:06.684811 containerd[1467]: time="2025-11-08T00:29:06.684745786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:29:06.686264 containerd[1467]: time="2025-11-08T00:29:06.686221614Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:06.689497 containerd[1467]: time="2025-11-08T00:29:06.689456779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:06.690695 containerd[1467]: time="2025-11-08T00:29:06.690642138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.387417457s" Nov 8 00:29:06.690695 containerd[1467]: 
time="2025-11-08T00:29:06.690685194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:29:06.691608 containerd[1467]: time="2025-11-08T00:29:06.691580805Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:29:07.362465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245646709.mount: Deactivated successfully. Nov 8 00:29:07.634981 containerd[1467]: time="2025-11-08T00:29:07.634819162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.636288 containerd[1467]: time="2025-11-08T00:29:07.636234404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:29:07.637589 containerd[1467]: time="2025-11-08T00:29:07.637559114Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.642681 containerd[1467]: time="2025-11-08T00:29:07.642639425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.643763 containerd[1467]: time="2025-11-08T00:29:07.643697426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 952.083333ms" Nov 8 00:29:07.643763 containerd[1467]: time="2025-11-08T00:29:07.643750914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:29:07.644460 containerd[1467]: time="2025-11-08T00:29:07.644389201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:29:08.888414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640636725.mount: Deactivated successfully. Nov 8 00:29:11.589440 containerd[1467]: time="2025-11-08T00:29:11.589351473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.590153 containerd[1467]: time="2025-11-08T00:29:11.590078598Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:29:11.592106 containerd[1467]: time="2025-11-08T00:29:11.592039748Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.595311 containerd[1467]: time="2025-11-08T00:29:11.595237167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.596712 containerd[1467]: time="2025-11-08T00:29:11.596670613Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.952210389s" Nov 8 00:29:11.596712 containerd[1467]: time="2025-11-08T00:29:11.596710556Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:29:11.766956 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 3. Nov 8 00:29:11.776558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:11.977763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:11.985141 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:12.044455 kubelet[2049]: E1108 00:29:12.044380 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:12.050566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:12.050806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:15.412483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:15.426675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:15.453356 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-9.scope)... Nov 8 00:29:15.453374 systemd[1]: Reloading... Nov 8 00:29:15.526425 zram_generator::config[2122]: No configuration found. Nov 8 00:29:15.947655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:16.032160 systemd[1]: Reloading finished in 578 ms. Nov 8 00:29:16.087157 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:29:16.087332 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:29:16.087744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:29:16.089561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:16.265061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:16.270072 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:29:16.320078 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:16.320078 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:29:16.320078 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:29:16.320615 kubelet[2168]: I1108 00:29:16.320116 2168 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:29:17.366928 kubelet[2168]: I1108 00:29:17.366884 2168 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:29:17.366928 kubelet[2168]: I1108 00:29:17.366921 2168 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:29:17.367665 kubelet[2168]: I1108 00:29:17.367161 2168 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:29:17.394943 kubelet[2168]: E1108 00:29:17.394896 2168 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:29:17.396120 kubelet[2168]: I1108 00:29:17.396092 2168 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:17.409937 kubelet[2168]: E1108 00:29:17.409880 2168 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:29:17.409937 kubelet[2168]: I1108 00:29:17.409922 2168 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:29:17.415860 kubelet[2168]: I1108 00:29:17.415829 2168 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:29:17.416140 kubelet[2168]: I1108 00:29:17.416111 2168 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:29:17.416343 kubelet[2168]: I1108 00:29:17.416135 2168 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:29:17.416482 kubelet[2168]: I1108 00:29:17.416348 2168 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:29:17.416482 
kubelet[2168]: I1108 00:29:17.416358 2168 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:29:17.416570 kubelet[2168]: I1108 00:29:17.416552 2168 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:17.419209 kubelet[2168]: I1108 00:29:17.419182 2168 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:29:17.419209 kubelet[2168]: I1108 00:29:17.419206 2168 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:29:17.419281 kubelet[2168]: I1108 00:29:17.419268 2168 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:29:17.419316 kubelet[2168]: I1108 00:29:17.419289 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:29:17.426326 kubelet[2168]: I1108 00:29:17.426302 2168 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:29:17.426849 kubelet[2168]: I1108 00:29:17.426830 2168 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:29:17.427112 kubelet[2168]: E1108 00:29:17.427068 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:29:17.428301 kubelet[2168]: E1108 00:29:17.428263 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:29:17.428425 kubelet[2168]: W1108 00:29:17.428409 
2168 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:29:17.431705 kubelet[2168]: I1108 00:29:17.431682 2168 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:29:17.431759 kubelet[2168]: I1108 00:29:17.431745 2168 server.go:1289] "Started kubelet" Nov 8 00:29:17.431858 kubelet[2168]: I1108 00:29:17.431815 2168 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:29:17.432995 kubelet[2168]: I1108 00:29:17.432977 2168 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:29:17.434276 kubelet[2168]: I1108 00:29:17.434248 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:29:17.436568 kubelet[2168]: E1108 00:29:17.435789 2168 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:29:17.436568 kubelet[2168]: I1108 00:29:17.436040 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:29:17.437475 kubelet[2168]: I1108 00:29:17.437093 2168 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:29:17.437638 kubelet[2168]: E1108 00:29:17.436364 2168 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0947667ba0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:29:17.431708173 +0000 UTC m=+1.157319429,LastTimestamp:2025-11-08 00:29:17.431708173 +0000 UTC 
m=+1.157319429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:29:17.438080 kubelet[2168]: E1108 00:29:17.438050 2168 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:29:17.438675 kubelet[2168]: I1108 00:29:17.438659 2168 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:29:17.439529 kubelet[2168]: I1108 00:29:17.439508 2168 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:29:17.439686 kubelet[2168]: I1108 00:29:17.439674 2168 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:29:17.440055 kubelet[2168]: I1108 00:29:17.440020 2168 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:29:17.440199 kubelet[2168]: I1108 00:29:17.440142 2168 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:29:17.440424 kubelet[2168]: E1108 00:29:17.440363 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:29:17.440578 kubelet[2168]: I1108 00:29:17.440503 2168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:29:17.443682 kubelet[2168]: E1108 00:29:17.443612 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: 
connection refused" interval="200ms" Nov 8 00:29:17.443884 kubelet[2168]: I1108 00:29:17.443863 2168 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:29:17.462437 kubelet[2168]: I1108 00:29:17.462353 2168 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:29:17.465655 kubelet[2168]: I1108 00:29:17.465619 2168 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:29:17.465655 kubelet[2168]: I1108 00:29:17.465643 2168 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:29:17.465750 kubelet[2168]: I1108 00:29:17.465663 2168 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:17.466212 kubelet[2168]: I1108 00:29:17.466191 2168 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:29:17.466262 kubelet[2168]: I1108 00:29:17.466232 2168 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:29:17.466287 kubelet[2168]: I1108 00:29:17.466271 2168 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:29:17.466308 kubelet[2168]: I1108 00:29:17.466288 2168 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:29:17.466377 kubelet[2168]: E1108 00:29:17.466357 2168 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:29:17.469034 kubelet[2168]: E1108 00:29:17.468101 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:29:17.470346 kubelet[2168]: I1108 00:29:17.470321 2168 policy_none.go:49] "None policy: Start" Nov 8 00:29:17.470423 kubelet[2168]: I1108 00:29:17.470358 2168 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:29:17.470423 kubelet[2168]: I1108 00:29:17.470384 2168 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:29:17.477068 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:29:17.490111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:29:17.493362 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 8 00:29:17.503728 kubelet[2168]: E1108 00:29:17.503421 2168 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:29:17.503728 kubelet[2168]: I1108 00:29:17.503706 2168 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:29:17.503848 kubelet[2168]: I1108 00:29:17.503737 2168 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:29:17.504106 kubelet[2168]: I1108 00:29:17.504076 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:29:17.505201 kubelet[2168]: E1108 00:29:17.505146 2168 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:29:17.505260 kubelet[2168]: E1108 00:29:17.505217 2168 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:29:17.578095 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 8 00:29:17.590437 kubelet[2168]: E1108 00:29:17.590385 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:17.593308 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 8 00:29:17.595331 kubelet[2168]: E1108 00:29:17.595277 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:17.605140 kubelet[2168]: I1108 00:29:17.605118 2168 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:17.605773 kubelet[2168]: E1108 00:29:17.605497 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 8 00:29:17.608856 systemd[1]: Created slice kubepods-burstable-pod22377ccb1f7bd795812eb4285787063a.slice - libcontainer container kubepods-burstable-pod22377ccb1f7bd795812eb4285787063a.slice. Nov 8 00:29:17.610795 kubelet[2168]: E1108 00:29:17.610755 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:17.641123 kubelet[2168]: I1108 00:29:17.641026 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:17.641123 kubelet[2168]: I1108 00:29:17.641064 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:17.641123 kubelet[2168]: I1108 00:29:17.641096 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:17.641123 kubelet[2168]: I1108 00:29:17.641117 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:17.641450 kubelet[2168]: I1108 00:29:17.641137 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:17.641450 kubelet[2168]: I1108 00:29:17.641338 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:17.641450 kubelet[2168]: I1108 00:29:17.641361 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:17.641450 kubelet[2168]: I1108 00:29:17.641379 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:17.641450 kubelet[2168]: I1108 00:29:17.641418 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:17.644550 kubelet[2168]: E1108 00:29:17.644522 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Nov 8 00:29:17.807749 kubelet[2168]: I1108 00:29:17.807694 2168 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:17.808106 kubelet[2168]: E1108 00:29:17.808069 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 8 00:29:17.891803 kubelet[2168]: E1108 00:29:17.891639 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:17.892591 containerd[1467]: time="2025-11-08T00:29:17.892538608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:17.895718 kubelet[2168]: E1108 00:29:17.895685 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:17.896059 containerd[1467]: time="2025-11-08T00:29:17.896033836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:17.911596 kubelet[2168]: E1108 00:29:17.911566 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:17.912029 containerd[1467]: time="2025-11-08T00:29:17.912004290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22377ccb1f7bd795812eb4285787063a,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:18.045093 kubelet[2168]: E1108 00:29:18.045043 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Nov 8 00:29:18.209954 kubelet[2168]: I1108 00:29:18.209842 2168 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:18.210192 kubelet[2168]: E1108 00:29:18.210165 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 8 00:29:18.432933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591502239.mount: Deactivated successfully. 
Nov 8 00:29:18.443466 containerd[1467]: time="2025-11-08T00:29:18.443391599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:18.444411 containerd[1467]: time="2025-11-08T00:29:18.444370856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:18.445336 containerd[1467]: time="2025-11-08T00:29:18.445273483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:18.445993 containerd[1467]: time="2025-11-08T00:29:18.445913854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:29:18.446971 containerd[1467]: time="2025-11-08T00:29:18.446909329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:29:18.447694 containerd[1467]: time="2025-11-08T00:29:18.447668349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:29:18.448558 containerd[1467]: time="2025-11-08T00:29:18.448519696Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:18.451837 containerd[1467]: time="2025-11-08T00:29:18.451794216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.153675ms" Nov 8 00:29:18.453132 containerd[1467]: time="2025-11-08T00:29:18.453097221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 557.011813ms" Nov 8 00:29:18.453586 containerd[1467]: time="2025-11-08T00:29:18.453530628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:18.457087 containerd[1467]: time="2025-11-08T00:29:18.457055805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.997979ms" Nov 8 00:29:18.653521 containerd[1467]: time="2025-11-08T00:29:18.653163579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:18.653521 containerd[1467]: time="2025-11-08T00:29:18.653251766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:18.653521 containerd[1467]: time="2025-11-08T00:29:18.653269919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.653521 containerd[1467]: time="2025-11-08T00:29:18.653387986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.655062 containerd[1467]: time="2025-11-08T00:29:18.654736538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:18.655062 containerd[1467]: time="2025-11-08T00:29:18.654808366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:18.655062 containerd[1467]: time="2025-11-08T00:29:18.654830509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.655062 containerd[1467]: time="2025-11-08T00:29:18.654956224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.665422 containerd[1467]: time="2025-11-08T00:29:18.662693851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:18.665422 containerd[1467]: time="2025-11-08T00:29:18.662806264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:18.665422 containerd[1467]: time="2025-11-08T00:29:18.662841076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.665422 containerd[1467]: time="2025-11-08T00:29:18.663228555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.693642 systemd[1]: Started cri-containerd-7a96b2006f6869440ea6581874c14791eec40e8dc71c692e0cf71db6bd44d52d.scope - libcontainer container 7a96b2006f6869440ea6581874c14791eec40e8dc71c692e0cf71db6bd44d52d. Nov 8 00:29:18.700211 systemd[1]: Started cri-containerd-b7aab06273e5f0bab89820fa55078c2ee6a823c1dfa801747d43558d757cb234.scope - libcontainer container b7aab06273e5f0bab89820fa55078c2ee6a823c1dfa801747d43558d757cb234. Nov 8 00:29:18.704092 systemd[1]: Started cri-containerd-11a1c5cea3cbe573b85943611dcfb1a3e0713ef3d8c6509363053e585138d3f1.scope - libcontainer container 11a1c5cea3cbe573b85943611dcfb1a3e0713ef3d8c6509363053e585138d3f1. Nov 8 00:29:18.706384 kubelet[2168]: E1108 00:29:18.706340 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:29:18.763768 containerd[1467]: time="2025-11-08T00:29:18.763700161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a96b2006f6869440ea6581874c14791eec40e8dc71c692e0cf71db6bd44d52d\"" Nov 8 00:29:18.765361 kubelet[2168]: E1108 00:29:18.765321 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:18.773771 containerd[1467]: time="2025-11-08T00:29:18.773727950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22377ccb1f7bd795812eb4285787063a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7aab06273e5f0bab89820fa55078c2ee6a823c1dfa801747d43558d757cb234\"" Nov 8 
00:29:18.774286 containerd[1467]: time="2025-11-08T00:29:18.774151955Z" level=info msg="CreateContainer within sandbox \"7a96b2006f6869440ea6581874c14791eec40e8dc71c692e0cf71db6bd44d52d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:29:18.774695 kubelet[2168]: E1108 00:29:18.774673 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:18.778168 containerd[1467]: time="2025-11-08T00:29:18.778115801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"11a1c5cea3cbe573b85943611dcfb1a3e0713ef3d8c6509363053e585138d3f1\"" Nov 8 00:29:18.778741 kubelet[2168]: E1108 00:29:18.778710 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:18.780775 containerd[1467]: time="2025-11-08T00:29:18.780735034Z" level=info msg="CreateContainer within sandbox \"b7aab06273e5f0bab89820fa55078c2ee6a823c1dfa801747d43558d757cb234\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:29:18.783772 containerd[1467]: time="2025-11-08T00:29:18.783739220Z" level=info msg="CreateContainer within sandbox \"11a1c5cea3cbe573b85943611dcfb1a3e0713ef3d8c6509363053e585138d3f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:29:18.799456 containerd[1467]: time="2025-11-08T00:29:18.799421219Z" level=info msg="CreateContainer within sandbox \"7a96b2006f6869440ea6581874c14791eec40e8dc71c692e0cf71db6bd44d52d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa91ac6feeb6cbb60badbd4ee6a1f57a1ead846c04ec3ae6abf5736b97d14b56\"" Nov 8 00:29:18.799977 containerd[1467]: 
time="2025-11-08T00:29:18.799941699Z" level=info msg="StartContainer for \"aa91ac6feeb6cbb60badbd4ee6a1f57a1ead846c04ec3ae6abf5736b97d14b56\"" Nov 8 00:29:18.802094 containerd[1467]: time="2025-11-08T00:29:18.802056680Z" level=info msg="CreateContainer within sandbox \"b7aab06273e5f0bab89820fa55078c2ee6a823c1dfa801747d43558d757cb234\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"65fe93985fb335dd942fde412c20a025987b40f129413b5ebb9bf27fcc267e65\"" Nov 8 00:29:18.803453 containerd[1467]: time="2025-11-08T00:29:18.802639085Z" level=info msg="StartContainer for \"65fe93985fb335dd942fde412c20a025987b40f129413b5ebb9bf27fcc267e65\"" Nov 8 00:29:18.812012 containerd[1467]: time="2025-11-08T00:29:18.811955155Z" level=info msg="CreateContainer within sandbox \"11a1c5cea3cbe573b85943611dcfb1a3e0713ef3d8c6509363053e585138d3f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16e8210a0e3f780a5a548f7368affc4db7fe1aaee2d443680719f6c3c2195b0b\"" Nov 8 00:29:18.813834 containerd[1467]: time="2025-11-08T00:29:18.813812631Z" level=info msg="StartContainer for \"16e8210a0e3f780a5a548f7368affc4db7fe1aaee2d443680719f6c3c2195b0b\"" Nov 8 00:29:18.816637 kubelet[2168]: E1108 00:29:18.816600 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:29:18.841600 systemd[1]: Started cri-containerd-65fe93985fb335dd942fde412c20a025987b40f129413b5ebb9bf27fcc267e65.scope - libcontainer container 65fe93985fb335dd942fde412c20a025987b40f129413b5ebb9bf27fcc267e65. 
Nov 8 00:29:18.844323 systemd[1]: Started cri-containerd-aa91ac6feeb6cbb60badbd4ee6a1f57a1ead846c04ec3ae6abf5736b97d14b56.scope - libcontainer container aa91ac6feeb6cbb60badbd4ee6a1f57a1ead846c04ec3ae6abf5736b97d14b56. Nov 8 00:29:18.847304 kubelet[2168]: E1108 00:29:18.847249 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Nov 8 00:29:18.851163 systemd[1]: Started cri-containerd-16e8210a0e3f780a5a548f7368affc4db7fe1aaee2d443680719f6c3c2195b0b.scope - libcontainer container 16e8210a0e3f780a5a548f7368affc4db7fe1aaee2d443680719f6c3c2195b0b. Nov 8 00:29:18.931602 containerd[1467]: time="2025-11-08T00:29:18.930479771Z" level=info msg="StartContainer for \"16e8210a0e3f780a5a548f7368affc4db7fe1aaee2d443680719f6c3c2195b0b\" returns successfully" Nov 8 00:29:18.931602 containerd[1467]: time="2025-11-08T00:29:18.930440179Z" level=info msg="StartContainer for \"aa91ac6feeb6cbb60badbd4ee6a1f57a1ead846c04ec3ae6abf5736b97d14b56\" returns successfully" Nov 8 00:29:18.931602 containerd[1467]: time="2025-11-08T00:29:18.930643787Z" level=info msg="StartContainer for \"65fe93985fb335dd942fde412c20a025987b40f129413b5ebb9bf27fcc267e65\" returns successfully" Nov 8 00:29:19.011770 kubelet[2168]: I1108 00:29:19.011727 2168 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:19.480796 kubelet[2168]: E1108 00:29:19.480741 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:19.481766 kubelet[2168]: E1108 00:29:19.481728 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:19.482702 kubelet[2168]: 
E1108 00:29:19.482160 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:19.482702 kubelet[2168]: E1108 00:29:19.482259 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:19.484181 kubelet[2168]: E1108 00:29:19.484150 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:19.484295 kubelet[2168]: E1108 00:29:19.484267 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:20.486799 kubelet[2168]: E1108 00:29:20.486752 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:20.487222 kubelet[2168]: E1108 00:29:20.486883 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:20.487222 kubelet[2168]: E1108 00:29:20.486908 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:20.487222 kubelet[2168]: E1108 00:29:20.487009 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:20.487222 kubelet[2168]: E1108 00:29:20.487041 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:20.487222 
kubelet[2168]: E1108 00:29:20.487125 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:20.714808 kubelet[2168]: I1108 00:29:20.714757 2168 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:29:20.714808 kubelet[2168]: E1108 00:29:20.714804 2168 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:29:20.742481 kubelet[2168]: I1108 00:29:20.742337 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:20.763061 kubelet[2168]: E1108 00:29:20.762991 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:20.763061 kubelet[2168]: I1108 00:29:20.763038 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:20.764662 kubelet[2168]: E1108 00:29:20.764640 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:20.764662 kubelet[2168]: I1108 00:29:20.764661 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:20.765824 kubelet[2168]: E1108 00:29:20.765791 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:20.963721 kubelet[2168]: E1108 00:29:20.963658 2168 controller.go:145] "Failed to ensure lease exists, will retry" 
err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 8 00:29:21.431605 kubelet[2168]: I1108 00:29:21.431556 2168 apiserver.go:52] "Watching apiserver" Nov 8 00:29:21.440577 kubelet[2168]: I1108 00:29:21.440555 2168 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:29:21.686622 update_engine[1441]: I20251108 00:29:21.686411 1441 update_attempter.cc:509] Updating boot flags... Nov 8 00:29:21.731112 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2464) Nov 8 00:29:21.773500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2463) Nov 8 00:29:21.824518 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2463) Nov 8 00:29:22.740046 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)... Nov 8 00:29:22.740061 systemd[1]: Reloading... Nov 8 00:29:22.816816 zram_generator::config[2516]: No configuration found. Nov 8 00:29:22.827628 kubelet[2168]: I1108 00:29:22.827571 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:22.834576 kubelet[2168]: E1108 00:29:22.834535 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:22.927184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:23.019229 systemd[1]: Reloading finished in 278 ms. Nov 8 00:29:23.069557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:29:23.070134 kubelet[2168]: I1108 00:29:23.069566 2168 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:23.098364 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:29:23.098725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:23.098782 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 131.7M memory peak, 0B memory swap peak. Nov 8 00:29:23.114721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:23.299614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:23.305574 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:29:23.362779 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:23.362779 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:29:23.362779 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:29:23.363205 kubelet[2558]: I1108 00:29:23.362826 2558 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:29:23.370327 kubelet[2558]: I1108 00:29:23.369570 2558 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:29:23.370327 kubelet[2558]: I1108 00:29:23.369598 2558 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:29:23.370327 kubelet[2558]: I1108 00:29:23.369833 2558 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:29:23.372555 kubelet[2558]: I1108 00:29:23.372528 2558 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:29:23.374795 kubelet[2558]: I1108 00:29:23.374762 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:23.378949 kubelet[2558]: E1108 00:29:23.378906 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:29:23.378949 kubelet[2558]: I1108 00:29:23.378947 2558 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:29:23.385038 kubelet[2558]: I1108 00:29:23.384985 2558 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:29:23.385403 kubelet[2558]: I1108 00:29:23.385346 2558 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:29:23.385589 kubelet[2558]: I1108 00:29:23.385388 2558 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:29:23.385589 kubelet[2558]: I1108 00:29:23.385585 2558 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:29:23.385710 
kubelet[2558]: I1108 00:29:23.385593 2558 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:29:23.385710 kubelet[2558]: I1108 00:29:23.385646 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:23.385866 kubelet[2558]: I1108 00:29:23.385842 2558 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:29:23.385866 kubelet[2558]: I1108 00:29:23.385860 2558 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:29:23.385916 kubelet[2558]: I1108 00:29:23.385891 2558 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:29:23.385940 kubelet[2558]: I1108 00:29:23.385922 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:29:23.387035 kubelet[2558]: I1108 00:29:23.387011 2558 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:29:23.387859 kubelet[2558]: I1108 00:29:23.387839 2558 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:29:23.393847 kubelet[2558]: I1108 00:29:23.393818 2558 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:29:23.394024 kubelet[2558]: I1108 00:29:23.393964 2558 server.go:1289] "Started kubelet" Nov 8 00:29:23.394180 kubelet[2558]: I1108 00:29:23.394128 2558 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:29:23.395615 kubelet[2558]: I1108 00:29:23.394624 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:29:23.395615 kubelet[2558]: I1108 00:29:23.394988 2558 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:29:23.395615 kubelet[2558]: I1108 00:29:23.395351 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:29:23.397761 kubelet[2558]: I1108 
00:29:23.396387 2558 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:29:23.400370 kubelet[2558]: I1108 00:29:23.400341 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:29:23.401537 kubelet[2558]: E1108 00:29:23.401495 2558 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:29:23.403943 kubelet[2558]: I1108 00:29:23.403902 2558 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:29:23.404065 kubelet[2558]: I1108 00:29:23.404042 2558 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:29:23.404292 kubelet[2558]: I1108 00:29:23.404262 2558 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:29:23.404497 kubelet[2558]: I1108 00:29:23.404459 2558 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:29:23.404633 kubelet[2558]: I1108 00:29:23.404592 2558 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:29:23.407586 kubelet[2558]: I1108 00:29:23.407556 2558 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:29:23.415232 kubelet[2558]: I1108 00:29:23.415190 2558 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:29:23.416998 kubelet[2558]: I1108 00:29:23.416969 2558 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:29:23.416998 kubelet[2558]: I1108 00:29:23.416993 2558 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:29:23.417113 kubelet[2558]: I1108 00:29:23.417030 2558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:29:23.417113 kubelet[2558]: I1108 00:29:23.417043 2558 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:29:23.417113 kubelet[2558]: E1108 00:29:23.417096 2558 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:29:23.443518 kubelet[2558]: I1108 00:29:23.443482 2558 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:29:23.443518 kubelet[2558]: I1108 00:29:23.443502 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:29:23.443518 kubelet[2558]: I1108 00:29:23.443523 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:23.443715 kubelet[2558]: I1108 00:29:23.443650 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:29:23.443715 kubelet[2558]: I1108 00:29:23.443662 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:29:23.443715 kubelet[2558]: I1108 00:29:23.443679 2558 policy_none.go:49] "None policy: Start" Nov 8 00:29:23.443715 kubelet[2558]: I1108 00:29:23.443690 2558 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:29:23.443715 kubelet[2558]: I1108 00:29:23.443701 2558 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:29:23.443817 kubelet[2558]: I1108 00:29:23.443795 2558 state_mem.go:75] "Updated machine memory state" Nov 8 00:29:23.447820 kubelet[2558]: E1108 00:29:23.447791 2558 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:29:23.448146 kubelet[2558]: I1108 00:29:23.447984 
2558 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:29:23.448146 kubelet[2558]: I1108 00:29:23.447999 2558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:29:23.448206 kubelet[2558]: I1108 00:29:23.448195 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:29:23.449523 kubelet[2558]: E1108 00:29:23.449474 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:29:23.518665 kubelet[2558]: I1108 00:29:23.518610 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:23.518792 kubelet[2558]: I1108 00:29:23.518684 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:23.518792 kubelet[2558]: I1108 00:29:23.518636 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.554483 kubelet[2558]: I1108 00:29:23.554345 2558 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:23.605460 kubelet[2558]: I1108 00:29:23.605419 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.605460 kubelet[2558]: I1108 00:29:23.605456 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.605549 kubelet[2558]: I1108 00:29:23.605474 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.605549 kubelet[2558]: I1108 00:29:23.605505 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:23.605549 kubelet[2558]: I1108 00:29:23.605520 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:23.605549 kubelet[2558]: I1108 00:29:23.605538 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.605651 kubelet[2558]: I1108 00:29:23.605558 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " 
pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:23.605651 kubelet[2558]: I1108 00:29:23.605573 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22377ccb1f7bd795812eb4285787063a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22377ccb1f7bd795812eb4285787063a\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:23.605651 kubelet[2558]: I1108 00:29:23.605595 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:23.614628 kubelet[2558]: E1108 00:29:23.614565 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:23.645518 kubelet[2558]: I1108 00:29:23.645479 2558 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:29:23.645679 kubelet[2558]: I1108 00:29:23.645567 2558 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:29:23.758227 sudo[2598]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 8 00:29:23.758629 sudo[2598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 8 00:29:23.909673 kubelet[2558]: E1108 00:29:23.909565 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:23.909673 kubelet[2558]: E1108 00:29:23.909565 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:23.915329 kubelet[2558]: E1108 00:29:23.915296 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:24.245079 sudo[2598]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:24.387628 kubelet[2558]: I1108 00:29:24.387576 2558 apiserver.go:52] "Watching apiserver" Nov 8 00:29:24.404693 kubelet[2558]: I1108 00:29:24.404639 2558 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:29:24.428185 kubelet[2558]: I1108 00:29:24.428140 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:24.428339 kubelet[2558]: E1108 00:29:24.428225 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:24.431424 kubelet[2558]: I1108 00:29:24.428557 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:24.436431 kubelet[2558]: E1108 00:29:24.436379 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:24.436636 kubelet[2558]: E1108 00:29:24.436585 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:24.438707 kubelet[2558]: E1108 00:29:24.438038 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:24.438707 kubelet[2558]: E1108 00:29:24.438141 2558 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:24.453001 kubelet[2558]: I1108 00:29:24.452931 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.452882229 podStartE2EDuration="2.452882229s" podCreationTimestamp="2025-11-08 00:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:24.452568041 +0000 UTC m=+1.137730761" watchObservedRunningTime="2025-11-08 00:29:24.452882229 +0000 UTC m=+1.138044939" Nov 8 00:29:24.462193 kubelet[2558]: I1108 00:29:24.462124 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.462100593 podStartE2EDuration="1.462100593s" podCreationTimestamp="2025-11-08 00:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:24.461906472 +0000 UTC m=+1.147069202" watchObservedRunningTime="2025-11-08 00:29:24.462100593 +0000 UTC m=+1.147263293" Nov 8 00:29:24.476689 kubelet[2558]: I1108 00:29:24.476583 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4765570559999999 podStartE2EDuration="1.476557056s" podCreationTimestamp="2025-11-08 00:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:24.469450214 +0000 UTC m=+1.154612924" watchObservedRunningTime="2025-11-08 00:29:24.476557056 +0000 UTC m=+1.161719776" Nov 8 00:29:25.429776 kubelet[2558]: E1108 00:29:25.429716 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:25.430286 kubelet[2558]: E1108 00:29:25.429919 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:25.430286 kubelet[2558]: E1108 00:29:25.430207 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:25.526249 sudo[1651]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:25.528182 sshd[1648]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:25.533351 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:34938.service: Deactivated successfully. Nov 8 00:29:25.536463 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:29:25.536722 systemd[1]: session-9.scope: Consumed 6.293s CPU time, 159.1M memory peak, 0B memory swap peak. Nov 8 00:29:25.537314 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:29:25.538676 systemd-logind[1438]: Removed session 9. Nov 8 00:29:26.431626 kubelet[2558]: E1108 00:29:26.431581 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:26.840524 kubelet[2558]: E1108 00:29:26.840478 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:28.116510 kubelet[2558]: I1108 00:29:28.116453 2558 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:29:28.118905 containerd[1467]: time="2025-11-08T00:29:28.118852187Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:29:28.119215 kubelet[2558]: I1108 00:29:28.119068 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:29:29.420372 kubelet[2558]: E1108 00:29:29.420181 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.426660 systemd[1]: Created slice kubepods-besteffort-pod7bbbc2a2_ce9b_4935_908d_fad37e9ad9e0.slice - libcontainer container kubepods-besteffort-pod7bbbc2a2_ce9b_4935_908d_fad37e9ad9e0.slice. Nov 8 00:29:29.435779 kubelet[2558]: E1108 00:29:29.435757 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.442706 kubelet[2558]: I1108 00:29:29.442545 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zd9fj\" (UID: \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\") " pod="kube-system/cilium-operator-6c4d7847fc-zd9fj" Nov 8 00:29:29.442706 kubelet[2558]: I1108 00:29:29.442602 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j85s\" (UniqueName: \"kubernetes.io/projected/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-kube-api-access-8j85s\") pod \"cilium-operator-6c4d7847fc-zd9fj\" (UID: \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\") " pod="kube-system/cilium-operator-6c4d7847fc-zd9fj" Nov 8 00:29:29.492063 systemd[1]: Created slice kubepods-besteffort-pod94f34db2_ef82_4ba0_8c4a_4ef310657872.slice - libcontainer container kubepods-besteffort-pod94f34db2_ef82_4ba0_8c4a_4ef310657872.slice. 
Nov 8 00:29:29.519812 systemd[1]: Created slice kubepods-burstable-pod4cc99aa1_5f3d_4a28_aa6f_c204c823ce46.slice - libcontainer container kubepods-burstable-pod4cc99aa1_5f3d_4a28_aa6f_c204c823ce46.slice. Nov 8 00:29:29.543588 kubelet[2558]: I1108 00:29:29.543547 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94f34db2-ef82-4ba0-8c4a-4ef310657872-xtables-lock\") pod \"kube-proxy-mzv4v\" (UID: \"94f34db2-ef82-4ba0-8c4a-4ef310657872\") " pod="kube-system/kube-proxy-mzv4v" Nov 8 00:29:29.543588 kubelet[2558]: I1108 00:29:29.543582 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqzt\" (UniqueName: \"kubernetes.io/projected/94f34db2-ef82-4ba0-8c4a-4ef310657872-kube-api-access-lxqzt\") pod \"kube-proxy-mzv4v\" (UID: \"94f34db2-ef82-4ba0-8c4a-4ef310657872\") " pod="kube-system/kube-proxy-mzv4v" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543612 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-run\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543647 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94f34db2-ef82-4ba0-8c4a-4ef310657872-kube-proxy\") pod \"kube-proxy-mzv4v\" (UID: \"94f34db2-ef82-4ba0-8c4a-4ef310657872\") " pod="kube-system/kube-proxy-mzv4v" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543681 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-etc-cni-netd\") pod 
\"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543697 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-net\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543715 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-bpf-maps\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543779 kubelet[2558]: I1108 00:29:29.543732 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hostproc\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543983 kubelet[2558]: I1108 00:29:29.543815 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-cgroup\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543983 kubelet[2558]: I1108 00:29:29.543903 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-lib-modules\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543983 kubelet[2558]: I1108 
00:29:29.543932 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-kernel\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543983 kubelet[2558]: I1108 00:29:29.543950 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-xtables-lock\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.543983 kubelet[2558]: I1108 00:29:29.543967 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-config-path\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.544120 kubelet[2558]: I1108 00:29:29.544000 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94f34db2-ef82-4ba0-8c4a-4ef310657872-lib-modules\") pod \"kube-proxy-mzv4v\" (UID: \"94f34db2-ef82-4ba0-8c4a-4ef310657872\") " pod="kube-system/kube-proxy-mzv4v" Nov 8 00:29:29.544120 kubelet[2558]: I1108 00:29:29.544060 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cni-path\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.544120 kubelet[2558]: I1108 00:29:29.544098 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-clustermesh-secrets\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.544204 kubelet[2558]: I1108 00:29:29.544146 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hubble-tls\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.544204 kubelet[2558]: I1108 00:29:29.544173 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74fxf\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-kube-api-access-74fxf\") pod \"cilium-7hpm8\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " pod="kube-system/cilium-7hpm8" Nov 8 00:29:29.736736 kubelet[2558]: E1108 00:29:29.736576 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.737875 containerd[1467]: time="2025-11-08T00:29:29.737825215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zd9fj,Uid:7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:29.769936 containerd[1467]: time="2025-11-08T00:29:29.769607510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:29.769936 containerd[1467]: time="2025-11-08T00:29:29.769687252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:29.769936 containerd[1467]: time="2025-11-08T00:29:29.769702885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.769936 containerd[1467]: time="2025-11-08T00:29:29.769799062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.789536 systemd[1]: Started cri-containerd-878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85.scope - libcontainer container 878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85. Nov 8 00:29:29.795485 kubelet[2558]: E1108 00:29:29.795444 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.798630 containerd[1467]: time="2025-11-08T00:29:29.798595838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzv4v,Uid:94f34db2-ef82-4ba0-8c4a-4ef310657872,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:29.822746 kubelet[2558]: E1108 00:29:29.822631 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.825820 containerd[1467]: time="2025-11-08T00:29:29.825778905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hpm8,Uid:4cc99aa1-5f3d-4a28-aa6f-c204c823ce46,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:29.834780 containerd[1467]: time="2025-11-08T00:29:29.834686607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:29.834974 containerd[1467]: time="2025-11-08T00:29:29.834906840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:29.835196 containerd[1467]: time="2025-11-08T00:29:29.834925461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.835386 containerd[1467]: time="2025-11-08T00:29:29.835330361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.837874 containerd[1467]: time="2025-11-08T00:29:29.837839416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zd9fj,Uid:7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\"" Nov 8 00:29:29.838885 kubelet[2558]: E1108 00:29:29.838656 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.840524 containerd[1467]: time="2025-11-08T00:29:29.840495818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 8 00:29:29.858556 containerd[1467]: time="2025-11-08T00:29:29.858372064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:29.858556 containerd[1467]: time="2025-11-08T00:29:29.858515492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:29.858556 containerd[1467]: time="2025-11-08T00:29:29.858528811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.860905 systemd[1]: Started cri-containerd-8b9df451d4e301a5bb8a4bd98d34fa93e9c674adb75d24b4f230f006d86204d2.scope - libcontainer container 8b9df451d4e301a5bb8a4bd98d34fa93e9c674adb75d24b4f230f006d86204d2. Nov 8 00:29:29.861619 containerd[1467]: time="2025-11-08T00:29:29.858628404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.884563 systemd[1]: Started cri-containerd-f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6.scope - libcontainer container f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6. Nov 8 00:29:29.906446 containerd[1467]: time="2025-11-08T00:29:29.906373001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzv4v,Uid:94f34db2-ef82-4ba0-8c4a-4ef310657872,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b9df451d4e301a5bb8a4bd98d34fa93e9c674adb75d24b4f230f006d86204d2\"" Nov 8 00:29:29.909435 kubelet[2558]: E1108 00:29:29.908257 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.917465 containerd[1467]: time="2025-11-08T00:29:29.917431448Z" level=info msg="CreateContainer within sandbox \"8b9df451d4e301a5bb8a4bd98d34fa93e9c674adb75d24b4f230f006d86204d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:29:29.924133 containerd[1467]: time="2025-11-08T00:29:29.924074981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hpm8,Uid:4cc99aa1-5f3d-4a28-aa6f-c204c823ce46,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\"" Nov 8 00:29:29.926031 kubelet[2558]: E1108 00:29:29.925994 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:29.943044 containerd[1467]: time="2025-11-08T00:29:29.942988077Z" level=info msg="CreateContainer within sandbox \"8b9df451d4e301a5bb8a4bd98d34fa93e9c674adb75d24b4f230f006d86204d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79721dc41362f2c7ff4b03b260b28c6eb5d627973d77aebaa69d83b2757f9030\"" Nov 8 00:29:29.943717 containerd[1467]: time="2025-11-08T00:29:29.943639658Z" level=info msg="StartContainer for \"79721dc41362f2c7ff4b03b260b28c6eb5d627973d77aebaa69d83b2757f9030\"" Nov 8 00:29:29.978723 systemd[1]: Started cri-containerd-79721dc41362f2c7ff4b03b260b28c6eb5d627973d77aebaa69d83b2757f9030.scope - libcontainer container 79721dc41362f2c7ff4b03b260b28c6eb5d627973d77aebaa69d83b2757f9030. Nov 8 00:29:30.056416 containerd[1467]: time="2025-11-08T00:29:30.056356522Z" level=info msg="StartContainer for \"79721dc41362f2c7ff4b03b260b28c6eb5d627973d77aebaa69d83b2757f9030\" returns successfully" Nov 8 00:29:30.439992 kubelet[2558]: E1108 00:29:30.439841 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:31.299250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071344755.mount: Deactivated successfully. 
Nov 8 00:29:31.815971 containerd[1467]: time="2025-11-08T00:29:31.815906061Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:31.816598 containerd[1467]: time="2025-11-08T00:29:31.816523906Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 8 00:29:31.817611 containerd[1467]: time="2025-11-08T00:29:31.817556251Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:31.818781 containerd[1467]: time="2025-11-08T00:29:31.818754391Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.978225802s" Nov 8 00:29:31.818827 containerd[1467]: time="2025-11-08T00:29:31.818787781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 8 00:29:31.820545 containerd[1467]: time="2025-11-08T00:29:31.820147614Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 8 00:29:31.823726 containerd[1467]: time="2025-11-08T00:29:31.823676531Z" level=info msg="CreateContainer within sandbox 
\"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 8 00:29:31.897026 containerd[1467]: time="2025-11-08T00:29:31.896966396Z" level=info msg="CreateContainer within sandbox \"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\"" Nov 8 00:29:31.898500 containerd[1467]: time="2025-11-08T00:29:31.897630588Z" level=info msg="StartContainer for \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\"" Nov 8 00:29:31.928608 systemd[1]: Started cri-containerd-8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d.scope - libcontainer container 8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d. Nov 8 00:29:31.958890 containerd[1467]: time="2025-11-08T00:29:31.958839214Z" level=info msg="StartContainer for \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" returns successfully" Nov 8 00:29:32.449553 kubelet[2558]: E1108 00:29:32.448822 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:32.462078 kubelet[2558]: I1108 00:29:32.462019 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mzv4v" podStartSLOduration=3.462001261 podStartE2EDuration="3.462001261s" podCreationTimestamp="2025-11-08 00:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:30.449244446 +0000 UTC m=+7.134407156" watchObservedRunningTime="2025-11-08 00:29:32.462001261 +0000 UTC m=+9.147163971" Nov 8 00:29:33.451231 kubelet[2558]: E1108 00:29:33.451182 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:36.402363 kubelet[2558]: I1108 00:29:36.402298 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zd9fj" podStartSLOduration=5.422714192 podStartE2EDuration="7.402280738s" podCreationTimestamp="2025-11-08 00:29:29 +0000 UTC" firstStartedPulling="2025-11-08 00:29:29.840005133 +0000 UTC m=+6.525167843" lastFinishedPulling="2025-11-08 00:29:31.819571679 +0000 UTC m=+8.504734389" observedRunningTime="2025-11-08 00:29:32.462563711 +0000 UTC m=+9.147726421" watchObservedRunningTime="2025-11-08 00:29:36.402280738 +0000 UTC m=+13.087443448" Nov 8 00:29:36.409004 kubelet[2558]: E1108 00:29:36.408962 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:36.457009 kubelet[2558]: E1108 00:29:36.456967 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:36.847875 kubelet[2558]: E1108 00:29:36.847205 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:37.458349 kubelet[2558]: E1108 00:29:37.458303 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:38.247360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100135468.mount: Deactivated successfully. 
Nov 8 00:29:40.290868 containerd[1467]: time="2025-11-08T00:29:40.290789376Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:40.291912 containerd[1467]: time="2025-11-08T00:29:40.291822078Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 8 00:29:40.292952 containerd[1467]: time="2025-11-08T00:29:40.292858105Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:40.295316 containerd[1467]: time="2025-11-08T00:29:40.295174003Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.474978588s" Nov 8 00:29:40.295316 containerd[1467]: time="2025-11-08T00:29:40.295215447Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 8 00:29:40.301200 containerd[1467]: time="2025-11-08T00:29:40.301152093Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:29:40.314750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219072638.mount: Deactivated successfully. 
Nov 8 00:29:40.318866 containerd[1467]: time="2025-11-08T00:29:40.318829563Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\"" Nov 8 00:29:40.319431 containerd[1467]: time="2025-11-08T00:29:40.319405646Z" level=info msg="StartContainer for \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\"" Nov 8 00:29:40.363562 systemd[1]: Started cri-containerd-a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d.scope - libcontainer container a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d. Nov 8 00:29:40.395367 containerd[1467]: time="2025-11-08T00:29:40.395308679Z" level=info msg="StartContainer for \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\" returns successfully" Nov 8 00:29:40.407066 systemd[1]: cri-containerd-a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d.scope: Deactivated successfully. 
Nov 8 00:29:40.466992 kubelet[2558]: E1108 00:29:40.466836 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:40.954266 containerd[1467]: time="2025-11-08T00:29:40.951525304Z" level=info msg="shim disconnected" id=a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d namespace=k8s.io Nov 8 00:29:40.954266 containerd[1467]: time="2025-11-08T00:29:40.954252156Z" level=warning msg="cleaning up after shim disconnected" id=a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d namespace=k8s.io Nov 8 00:29:40.954266 containerd[1467]: time="2025-11-08T00:29:40.954268610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:41.311311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d-rootfs.mount: Deactivated successfully. Nov 8 00:29:41.467337 kubelet[2558]: E1108 00:29:41.467303 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:29:41.473638 containerd[1467]: time="2025-11-08T00:29:41.473574690Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:29:41.489337 containerd[1467]: time="2025-11-08T00:29:41.489288777Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\"" Nov 8 00:29:41.490020 containerd[1467]: time="2025-11-08T00:29:41.489950423Z" level=info msg="StartContainer for 
\"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\"" Nov 8 00:29:41.493592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209704509.mount: Deactivated successfully. Nov 8 00:29:41.527534 systemd[1]: Started cri-containerd-15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6.scope - libcontainer container 15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6. Nov 8 00:29:41.553013 containerd[1467]: time="2025-11-08T00:29:41.552957941Z" level=info msg="StartContainer for \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\" returns successfully" Nov 8 00:29:41.565712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:29:41.565983 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:41.566062 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:29:41.573891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:29:41.574310 systemd[1]: cri-containerd-15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6.scope: Deactivated successfully. Nov 8 00:29:41.594516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:41.594928 containerd[1467]: time="2025-11-08T00:29:41.594781177Z" level=info msg="shim disconnected" id=15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6 namespace=k8s.io Nov 8 00:29:41.594928 containerd[1467]: time="2025-11-08T00:29:41.594841571Z" level=warning msg="cleaning up after shim disconnected" id=15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6 namespace=k8s.io Nov 8 00:29:41.594928 containerd[1467]: time="2025-11-08T00:29:41.594853596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:42.311751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6-rootfs.mount: Deactivated successfully. 
Nov 8 00:29:42.471149 kubelet[2558]: E1108 00:29:42.470954 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:42.475550 containerd[1467]: time="2025-11-08T00:29:42.475507872Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:29:42.496727 containerd[1467]: time="2025-11-08T00:29:42.496665534Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\""
Nov 8 00:29:42.497338 containerd[1467]: time="2025-11-08T00:29:42.497313037Z" level=info msg="StartContainer for \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\""
Nov 8 00:29:42.534570 systemd[1]: Started cri-containerd-5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a.scope - libcontainer container 5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a.
Nov 8 00:29:42.566552 containerd[1467]: time="2025-11-08T00:29:42.566410934Z" level=info msg="StartContainer for \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\" returns successfully"
Nov 8 00:29:42.569157 systemd[1]: cri-containerd-5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a.scope: Deactivated successfully.
Nov 8 00:29:42.595105 containerd[1467]: time="2025-11-08T00:29:42.595029688Z" level=info msg="shim disconnected" id=5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a namespace=k8s.io
Nov 8 00:29:42.595105 containerd[1467]: time="2025-11-08T00:29:42.595099811Z" level=warning msg="cleaning up after shim disconnected" id=5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a namespace=k8s.io
Nov 8 00:29:42.595105 containerd[1467]: time="2025-11-08T00:29:42.595110543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:29:43.311218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a-rootfs.mount: Deactivated successfully.
Nov 8 00:29:43.475500 kubelet[2558]: E1108 00:29:43.475462 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:43.488557 containerd[1467]: time="2025-11-08T00:29:43.488497574Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 8 00:29:43.507167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385056870.mount: Deactivated successfully.
Nov 8 00:29:43.509627 containerd[1467]: time="2025-11-08T00:29:43.509568100Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\""
Nov 8 00:29:43.510251 containerd[1467]: time="2025-11-08T00:29:43.510194446Z" level=info msg="StartContainer for \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\""
Nov 8 00:29:43.545534 systemd[1]: Started cri-containerd-c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a.scope - libcontainer container c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a.
Nov 8 00:29:43.571177 systemd[1]: cri-containerd-c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a.scope: Deactivated successfully.
Nov 8 00:29:43.572774 containerd[1467]: time="2025-11-08T00:29:43.572733626Z" level=info msg="StartContainer for \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\" returns successfully"
Nov 8 00:29:43.597835 containerd[1467]: time="2025-11-08T00:29:43.597747568Z" level=info msg="shim disconnected" id=c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a namespace=k8s.io
Nov 8 00:29:43.597835 containerd[1467]: time="2025-11-08T00:29:43.597810907Z" level=warning msg="cleaning up after shim disconnected" id=c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a namespace=k8s.io
Nov 8 00:29:43.597835 containerd[1467]: time="2025-11-08T00:29:43.597819254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:29:44.311289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a-rootfs.mount: Deactivated successfully.
Nov 8 00:29:44.479083 kubelet[2558]: E1108 00:29:44.479045 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:44.484063 containerd[1467]: time="2025-11-08T00:29:44.484006193Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 8 00:29:44.500741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114063827.mount: Deactivated successfully.
Nov 8 00:29:44.503407 containerd[1467]: time="2025-11-08T00:29:44.503355439Z" level=info msg="CreateContainer within sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\""
Nov 8 00:29:44.503918 containerd[1467]: time="2025-11-08T00:29:44.503887491Z" level=info msg="StartContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\""
Nov 8 00:29:44.535528 systemd[1]: Started cri-containerd-8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975.scope - libcontainer container 8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975.
Nov 8 00:29:44.565868 containerd[1467]: time="2025-11-08T00:29:44.565753663Z" level=info msg="StartContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" returns successfully"
Nov 8 00:29:44.703673 kubelet[2558]: I1108 00:29:44.703635 2558 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:29:44.750787 systemd[1]: Created slice kubepods-burstable-podd6b80213_37f3_4848_8bea_ace736038133.slice - libcontainer container kubepods-burstable-podd6b80213_37f3_4848_8bea_ace736038133.slice.
Nov 8 00:29:44.762015 systemd[1]: Created slice kubepods-burstable-pod18eb110d_61fb_4b4a_a69b_4f83b31f0f4a.slice - libcontainer container kubepods-burstable-pod18eb110d_61fb_4b4a_a69b_4f83b31f0f4a.slice.
Nov 8 00:29:44.897543 kubelet[2558]: I1108 00:29:44.897377 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6b80213-37f3-4848-8bea-ace736038133-config-volume\") pod \"coredns-674b8bbfcf-4d668\" (UID: \"d6b80213-37f3-4848-8bea-ace736038133\") " pod="kube-system/coredns-674b8bbfcf-4d668"
Nov 8 00:29:44.897543 kubelet[2558]: I1108 00:29:44.897516 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18eb110d-61fb-4b4a-a69b-4f83b31f0f4a-config-volume\") pod \"coredns-674b8bbfcf-stt55\" (UID: \"18eb110d-61fb-4b4a-a69b-4f83b31f0f4a\") " pod="kube-system/coredns-674b8bbfcf-stt55"
Nov 8 00:29:44.897711 kubelet[2558]: I1108 00:29:44.897579 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrr65\" (UniqueName: \"kubernetes.io/projected/d6b80213-37f3-4848-8bea-ace736038133-kube-api-access-rrr65\") pod \"coredns-674b8bbfcf-4d668\" (UID: \"d6b80213-37f3-4848-8bea-ace736038133\") " pod="kube-system/coredns-674b8bbfcf-4d668"
Nov 8 00:29:44.897711 kubelet[2558]: I1108 00:29:44.897605 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pwb\" (UniqueName: \"kubernetes.io/projected/18eb110d-61fb-4b4a-a69b-4f83b31f0f4a-kube-api-access-b5pwb\") pod \"coredns-674b8bbfcf-stt55\" (UID: \"18eb110d-61fb-4b4a-a69b-4f83b31f0f4a\") " pod="kube-system/coredns-674b8bbfcf-stt55"
Nov 8 00:29:45.057085 kubelet[2558]: E1108 00:29:45.057014 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:45.066989 kubelet[2558]: E1108 00:29:45.066945 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:45.067883 containerd[1467]: time="2025-11-08T00:29:45.067831745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-stt55,Uid:18eb110d-61fb-4b4a-a69b-4f83b31f0f4a,Namespace:kube-system,Attempt:0,}"
Nov 8 00:29:45.074383 containerd[1467]: time="2025-11-08T00:29:45.074330848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4d668,Uid:d6b80213-37f3-4848-8bea-ace736038133,Namespace:kube-system,Attempt:0,}"
Nov 8 00:29:45.501651 kubelet[2558]: E1108 00:29:45.501613 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:45.519323 kubelet[2558]: I1108 00:29:45.519249 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hpm8" podStartSLOduration=6.151203494 podStartE2EDuration="16.519228758s" podCreationTimestamp="2025-11-08 00:29:29 +0000 UTC" firstStartedPulling="2025-11-08 00:29:29.928055861 +0000 UTC m=+6.613218571" lastFinishedPulling="2025-11-08 00:29:40.296081125 +0000 UTC m=+16.981243835" observedRunningTime="2025-11-08 00:29:45.514837971 +0000 UTC m=+22.200000691" watchObservedRunningTime="2025-11-08 00:29:45.519228758 +0000 UTC m=+22.204391468"
Nov 8 00:29:46.528338 kubelet[2558]: E1108 00:29:46.528292 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:46.780033 systemd-networkd[1394]: cilium_host: Link UP
Nov 8 00:29:46.780293 systemd-networkd[1394]: cilium_net: Link UP
Nov 8 00:29:46.780298 systemd-networkd[1394]: cilium_net: Gained carrier
Nov 8 00:29:46.780632 systemd-networkd[1394]: cilium_host: Gained carrier
Nov 8 00:29:46.812813 systemd-networkd[1394]: cilium_host: Gained IPv6LL
Nov 8 00:29:46.897817 systemd-networkd[1394]: cilium_vxlan: Link UP
Nov 8 00:29:46.897830 systemd-networkd[1394]: cilium_vxlan: Gained carrier
Nov 8 00:29:47.110425 kernel: NET: Registered PF_ALG protocol family
Nov 8 00:29:47.530216 kubelet[2558]: E1108 00:29:47.530178 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:47.732582 systemd-networkd[1394]: cilium_net: Gained IPv6LL
Nov 8 00:29:47.819472 systemd-networkd[1394]: lxc_health: Link UP
Nov 8 00:29:47.829964 systemd-networkd[1394]: lxc_health: Gained carrier
Nov 8 00:29:48.117729 systemd-networkd[1394]: lxce00b237a8efa: Link UP
Nov 8 00:29:48.127417 kernel: eth0: renamed from tmpb23ad
Nov 8 00:29:48.137001 systemd-networkd[1394]: lxce00b237a8efa: Gained carrier
Nov 8 00:29:48.156049 systemd-networkd[1394]: lxc130c26a0c4b6: Link UP
Nov 8 00:29:48.167562 kernel: eth0: renamed from tmpf4760
Nov 8 00:29:48.175211 systemd-networkd[1394]: lxc130c26a0c4b6: Gained carrier
Nov 8 00:29:48.195174 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:56174.service - OpenSSH per-connection server daemon (10.0.0.1:56174).
Nov 8 00:29:48.239464 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 56174 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:29:48.241338 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:48.245723 systemd-logind[1438]: New session 10 of user core.
Nov 8 00:29:48.250531 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 8 00:29:48.531975 kubelet[2558]: E1108 00:29:48.531933 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:48.607268 sshd[3780]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:48.611726 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit.
Nov 8 00:29:48.612179 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:56174.service: Deactivated successfully.
Nov 8 00:29:48.614938 systemd[1]: session-10.scope: Deactivated successfully.
Nov 8 00:29:48.616120 systemd-logind[1438]: Removed session 10.
Nov 8 00:29:48.693720 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL
Nov 8 00:29:49.140803 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Nov 8 00:29:49.460700 systemd-networkd[1394]: lxce00b237a8efa: Gained IPv6LL
Nov 8 00:29:49.533955 kubelet[2558]: E1108 00:29:49.533919 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:50.036661 systemd-networkd[1394]: lxc130c26a0c4b6: Gained IPv6LL
Nov 8 00:29:50.535242 kubelet[2558]: E1108 00:29:50.535188 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:51.697535 containerd[1467]: time="2025-11-08T00:29:51.697173784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:29:51.697535 containerd[1467]: time="2025-11-08T00:29:51.697279757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:29:51.697535 containerd[1467]: time="2025-11-08T00:29:51.697310248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:51.698465 containerd[1467]: time="2025-11-08T00:29:51.698329784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:51.709608 containerd[1467]: time="2025-11-08T00:29:51.709484400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:29:51.709764 containerd[1467]: time="2025-11-08T00:29:51.709639991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:29:51.709764 containerd[1467]: time="2025-11-08T00:29:51.709698549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:51.713156 containerd[1467]: time="2025-11-08T00:29:51.711635085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:51.730567 systemd[1]: Started cri-containerd-b23ada232bba9cd38474324733febf408797a9cdbcde49e6c5f6772e5b8abe2d.scope - libcontainer container b23ada232bba9cd38474324733febf408797a9cdbcde49e6c5f6772e5b8abe2d.
Nov 8 00:29:51.737277 systemd[1]: Started cri-containerd-f47608d7416fe87a6e39f7fa46d87e048f463ecaaaf8c310d150b6e657d822fd.scope - libcontainer container f47608d7416fe87a6e39f7fa46d87e048f463ecaaaf8c310d150b6e657d822fd.
Nov 8 00:29:51.750585 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:29:51.755106 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:29:51.783945 containerd[1467]: time="2025-11-08T00:29:51.783896366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-stt55,Uid:18eb110d-61fb-4b4a-a69b-4f83b31f0f4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b23ada232bba9cd38474324733febf408797a9cdbcde49e6c5f6772e5b8abe2d\""
Nov 8 00:29:51.785486 kubelet[2558]: E1108 00:29:51.785229 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:51.793507 containerd[1467]: time="2025-11-08T00:29:51.793441542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4d668,Uid:d6b80213-37f3-4848-8bea-ace736038133,Namespace:kube-system,Attempt:0,} returns sandbox id \"f47608d7416fe87a6e39f7fa46d87e048f463ecaaaf8c310d150b6e657d822fd\""
Nov 8 00:29:51.794523 kubelet[2558]: E1108 00:29:51.794498 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:51.878120 containerd[1467]: time="2025-11-08T00:29:51.878053960Z" level=info msg="CreateContainer within sandbox \"b23ada232bba9cd38474324733febf408797a9cdbcde49e6c5f6772e5b8abe2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 8 00:29:51.976876 containerd[1467]: time="2025-11-08T00:29:51.976663900Z" level=info msg="CreateContainer within sandbox \"f47608d7416fe87a6e39f7fa46d87e048f463ecaaaf8c310d150b6e657d822fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 8 00:29:52.043673 containerd[1467]: time="2025-11-08T00:29:52.043603597Z" level=info msg="CreateContainer within sandbox \"f47608d7416fe87a6e39f7fa46d87e048f463ecaaaf8c310d150b6e657d822fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b6fac0f25aaba8bfcd2a31c5c2630775b70f6b9a16e954b7bf9fea50b9626a\""
Nov 8 00:29:52.044217 containerd[1467]: time="2025-11-08T00:29:52.044173929Z" level=info msg="StartContainer for \"d0b6fac0f25aaba8bfcd2a31c5c2630775b70f6b9a16e954b7bf9fea50b9626a\""
Nov 8 00:29:52.047346 containerd[1467]: time="2025-11-08T00:29:52.047303832Z" level=info msg="CreateContainer within sandbox \"b23ada232bba9cd38474324733febf408797a9cdbcde49e6c5f6772e5b8abe2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2184dc28aaf5560dbe5708972ce2a5fdd85aaf65a454c1c076e7559e7304033f\""
Nov 8 00:29:52.047870 containerd[1467]: time="2025-11-08T00:29:52.047809566Z" level=info msg="StartContainer for \"2184dc28aaf5560dbe5708972ce2a5fdd85aaf65a454c1c076e7559e7304033f\""
Nov 8 00:29:52.072603 systemd[1]: Started cri-containerd-d0b6fac0f25aaba8bfcd2a31c5c2630775b70f6b9a16e954b7bf9fea50b9626a.scope - libcontainer container d0b6fac0f25aaba8bfcd2a31c5c2630775b70f6b9a16e954b7bf9fea50b9626a.
Nov 8 00:29:52.076068 systemd[1]: Started cri-containerd-2184dc28aaf5560dbe5708972ce2a5fdd85aaf65a454c1c076e7559e7304033f.scope - libcontainer container 2184dc28aaf5560dbe5708972ce2a5fdd85aaf65a454c1c076e7559e7304033f.
Nov 8 00:29:52.304702 containerd[1467]: time="2025-11-08T00:29:52.304638370Z" level=info msg="StartContainer for \"d0b6fac0f25aaba8bfcd2a31c5c2630775b70f6b9a16e954b7bf9fea50b9626a\" returns successfully"
Nov 8 00:29:52.304989 containerd[1467]: time="2025-11-08T00:29:52.304638651Z" level=info msg="StartContainer for \"2184dc28aaf5560dbe5708972ce2a5fdd85aaf65a454c1c076e7559e7304033f\" returns successfully"
Nov 8 00:29:52.539570 kubelet[2558]: E1108 00:29:52.539438 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:52.540380 kubelet[2558]: E1108 00:29:52.540359 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:52.549575 kubelet[2558]: I1108 00:29:52.549474 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4d668" podStartSLOduration=23.549452746 podStartE2EDuration="23.549452746s" podCreationTimestamp="2025-11-08 00:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:52.547573511 +0000 UTC m=+29.232736241" watchObservedRunningTime="2025-11-08 00:29:52.549452746 +0000 UTC m=+29.234615457"
Nov 8 00:29:52.569056 kubelet[2558]: I1108 00:29:52.568471 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-stt55" podStartSLOduration=23.568447062 podStartE2EDuration="23.568447062s" podCreationTimestamp="2025-11-08 00:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:52.566713428 +0000 UTC m=+29.251876158" watchObservedRunningTime="2025-11-08 00:29:52.568447062 +0000 UTC m=+29.253609782"
Nov 8 00:29:53.542287 kubelet[2558]: E1108 00:29:53.542240 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:53.542769 kubelet[2558]: E1108 00:29:53.542317 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:53.625798 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:56188.service - OpenSSH per-connection server daemon (10.0.0.1:56188).
Nov 8 00:29:53.670429 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 56188 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:29:53.672161 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:53.676483 systemd-logind[1438]: New session 11 of user core.
Nov 8 00:29:53.692613 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 8 00:29:53.832938 sshd[3980]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:53.837983 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:56188.service: Deactivated successfully.
Nov 8 00:29:53.841147 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:29:53.841949 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit.
Nov 8 00:29:53.842988 systemd-logind[1438]: Removed session 11.
Nov 8 00:29:54.543928 kubelet[2558]: E1108 00:29:54.543839 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:54.543928 kubelet[2558]: E1108 00:29:54.543903 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:29:58.849948 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:53632.service - OpenSSH per-connection server daemon (10.0.0.1:53632).
Nov 8 00:29:58.894235 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 53632 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:29:58.896264 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:58.901167 systemd-logind[1438]: New session 12 of user core.
Nov 8 00:29:58.912569 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:29:59.037124 sshd[3995]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:59.042272 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:53632.service: Deactivated successfully.
Nov 8 00:29:59.044577 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:29:59.045422 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:29:59.046543 systemd-logind[1438]: Removed session 12.
Nov 8 00:30:04.051637 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:53648.service - OpenSSH per-connection server daemon (10.0.0.1:53648).
Nov 8 00:30:04.094026 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 53648 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:04.095606 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:04.099715 systemd-logind[1438]: New session 13 of user core.
Nov 8 00:30:04.106533 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:30:04.221102 sshd[4013]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:04.232318 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:53648.service: Deactivated successfully.
Nov 8 00:30:04.234198 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:30:04.235822 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:30:04.245657 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:53656.service - OpenSSH per-connection server daemon (10.0.0.1:53656).
Nov 8 00:30:04.246649 systemd-logind[1438]: Removed session 13.
Nov 8 00:30:04.282137 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 53656 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:04.283643 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:04.288084 systemd-logind[1438]: New session 14 of user core.
Nov 8 00:30:04.300538 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:30:04.470067 sshd[4028]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:04.481795 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:53656.service: Deactivated successfully.
Nov 8 00:30:04.485840 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:30:04.490590 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:30:04.502109 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666).
Nov 8 00:30:04.503543 systemd-logind[1438]: Removed session 14.
Nov 8 00:30:04.540157 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:04.541693 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:04.545700 systemd-logind[1438]: New session 15 of user core.
Nov 8 00:30:04.555542 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:30:04.671267 sshd[4040]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:04.675535 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:53666.service: Deactivated successfully.
Nov 8 00:30:04.677846 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:30:04.678556 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:30:04.679606 systemd-logind[1438]: Removed session 15.
Nov 8 00:30:09.686000 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:32972.service - OpenSSH per-connection server daemon (10.0.0.1:32972).
Nov 8 00:30:09.732079 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 32972 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:09.734745 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:09.741035 systemd-logind[1438]: New session 16 of user core.
Nov 8 00:30:09.751753 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:30:09.888315 sshd[4054]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:09.894648 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:32972.service: Deactivated successfully.
Nov 8 00:30:09.898099 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:30:09.900265 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:30:09.901699 systemd-logind[1438]: Removed session 16.
Nov 8 00:30:14.911572 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974).
Nov 8 00:30:14.956137 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:14.957911 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:14.962200 systemd-logind[1438]: New session 17 of user core.
Nov 8 00:30:14.972527 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:30:15.099841 sshd[4069]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:15.110549 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:32974.service: Deactivated successfully.
Nov 8 00:30:15.112610 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:30:15.114432 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:30:15.125661 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:32986.service - OpenSSH per-connection server daemon (10.0.0.1:32986).
Nov 8 00:30:15.126555 systemd-logind[1438]: Removed session 17.
Nov 8 00:30:15.165439 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 32986 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:15.166982 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:15.171130 systemd-logind[1438]: New session 18 of user core.
Nov 8 00:30:15.180534 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:30:15.366791 sshd[4083]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:15.378577 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:32986.service: Deactivated successfully.
Nov 8 00:30:15.380639 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:30:15.382373 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:30:15.399681 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998).
Nov 8 00:30:15.400695 systemd-logind[1438]: Removed session 18.
Nov 8 00:30:15.435124 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:15.437019 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:15.441409 systemd-logind[1438]: New session 19 of user core.
Nov 8 00:30:15.451530 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:30:15.932166 sshd[4095]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:15.940440 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:32998.service: Deactivated successfully.
Nov 8 00:30:15.943819 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:30:15.951446 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:30:15.956846 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:33008.service - OpenSSH per-connection server daemon (10.0.0.1:33008).
Nov 8 00:30:15.957904 systemd-logind[1438]: Removed session 19.
Nov 8 00:30:15.997804 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 33008 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:15.999721 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:16.003998 systemd-logind[1438]: New session 20 of user core.
Nov 8 00:30:16.018571 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:30:16.307653 sshd[4115]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:16.320787 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:33008.service: Deactivated successfully.
Nov 8 00:30:16.323036 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:30:16.327149 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:30:16.335822 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:35980.service - OpenSSH per-connection server daemon (10.0.0.1:35980).
Nov 8 00:30:16.337783 systemd-logind[1438]: Removed session 20.
Nov 8 00:30:16.378236 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 35980 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:30:16.378891 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:16.385909 systemd-logind[1438]: New session 21 of user core.
Nov 8 00:30:16.394536 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:30:16.514743 sshd[4127]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:16.519479 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:35980.service: Deactivated successfully. Nov 8 00:30:16.521756 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:30:16.522378 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:30:16.523304 systemd-logind[1438]: Removed session 21. Nov 8 00:30:21.528585 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:35984.service - OpenSSH per-connection server daemon (10.0.0.1:35984). Nov 8 00:30:21.571499 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 35984 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:21.573803 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:21.578269 systemd-logind[1438]: New session 22 of user core. Nov 8 00:30:21.587514 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:30:21.703267 sshd[4141]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:21.707403 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:35984.service: Deactivated successfully. Nov 8 00:30:21.709841 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:30:21.711575 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:30:21.712612 systemd-logind[1438]: Removed session 22. Nov 8 00:30:26.716198 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:44090.service - OpenSSH per-connection server daemon (10.0.0.1:44090). Nov 8 00:30:26.756679 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 44090 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:26.758322 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:26.762924 systemd-logind[1438]: New session 23 of user core. 
Nov 8 00:30:26.771673 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:30:26.884272 sshd[4159]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:26.889132 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:44090.service: Deactivated successfully. Nov 8 00:30:26.891514 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:30:26.892148 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:30:26.893266 systemd-logind[1438]: Removed session 23. Nov 8 00:30:31.898126 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:44094.service - OpenSSH per-connection server daemon (10.0.0.1:44094). Nov 8 00:30:31.939386 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 44094 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:31.941102 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:31.945996 systemd-logind[1438]: New session 24 of user core. Nov 8 00:30:31.955603 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:30:32.078371 sshd[4175]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:32.086326 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:44094.service: Deactivated successfully. Nov 8 00:30:32.088125 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:30:32.090019 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:30:32.094676 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:44108.service - OpenSSH per-connection server daemon (10.0.0.1:44108). Nov 8 00:30:32.095659 systemd-logind[1438]: Removed session 24. Nov 8 00:30:32.132176 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 44108 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:32.133732 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:32.138089 systemd-logind[1438]: New session 25 of user core. 
Nov 8 00:30:32.147552 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:30:33.495175 containerd[1467]: time="2025-11-08T00:30:33.495112565Z" level=info msg="StopContainer for \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" with timeout 30 (s)" Nov 8 00:30:33.496275 containerd[1467]: time="2025-11-08T00:30:33.496255043Z" level=info msg="Stop container \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" with signal terminated" Nov 8 00:30:33.536070 systemd[1]: cri-containerd-8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d.scope: Deactivated successfully. Nov 8 00:30:33.552047 containerd[1467]: time="2025-11-08T00:30:33.551971464Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:30:33.564261 containerd[1467]: time="2025-11-08T00:30:33.564208905Z" level=info msg="StopContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" with timeout 2 (s)" Nov 8 00:30:33.564669 containerd[1467]: time="2025-11-08T00:30:33.564631010Z" level=info msg="Stop container \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" with signal terminated" Nov 8 00:30:33.564851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d-rootfs.mount: Deactivated successfully. 
Nov 8 00:30:33.570580 containerd[1467]: time="2025-11-08T00:30:33.570505496Z" level=info msg="shim disconnected" id=8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d namespace=k8s.io Nov 8 00:30:33.570636 containerd[1467]: time="2025-11-08T00:30:33.570578402Z" level=warning msg="cleaning up after shim disconnected" id=8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d namespace=k8s.io Nov 8 00:30:33.570636 containerd[1467]: time="2025-11-08T00:30:33.570592258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:33.574578 systemd-networkd[1394]: lxc_health: Link DOWN Nov 8 00:30:33.574588 systemd-networkd[1394]: lxc_health: Lost carrier Nov 8 00:30:33.591674 containerd[1467]: time="2025-11-08T00:30:33.591616580Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:33.595879 kubelet[2558]: E1108 00:30:33.595813 2558 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:30:33.597247 containerd[1467]: time="2025-11-08T00:30:33.597072305Z" level=info msg="StopContainer for \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" returns successfully" Nov 8 00:30:33.597724 containerd[1467]: time="2025-11-08T00:30:33.597684666Z" level=info msg="StopPodSandbox for \"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\"" Nov 8 00:30:33.597777 containerd[1467]: time="2025-11-08T00:30:33.597738256Z" level=info msg="Container to stop \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:30:33.601751 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85-shm.mount: Deactivated successfully. Nov 8 00:30:33.602738 systemd[1]: cri-containerd-8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975.scope: Deactivated successfully. Nov 8 00:30:33.603140 systemd[1]: cri-containerd-8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975.scope: Consumed 7.200s CPU time. Nov 8 00:30:33.629875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975-rootfs.mount: Deactivated successfully. Nov 8 00:30:33.632032 systemd[1]: cri-containerd-878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85.scope: Deactivated successfully. Nov 8 00:30:33.646208 containerd[1467]: time="2025-11-08T00:30:33.645945354Z" level=info msg="shim disconnected" id=8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975 namespace=k8s.io Nov 8 00:30:33.646208 containerd[1467]: time="2025-11-08T00:30:33.646013752Z" level=warning msg="cleaning up after shim disconnected" id=8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975 namespace=k8s.io Nov 8 00:30:33.646208 containerd[1467]: time="2025-11-08T00:30:33.646025824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:33.660305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85-rootfs.mount: Deactivated successfully. 
Nov 8 00:30:33.664873 containerd[1467]: time="2025-11-08T00:30:33.664749480Z" level=info msg="shim disconnected" id=878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85 namespace=k8s.io Nov 8 00:30:33.665128 containerd[1467]: time="2025-11-08T00:30:33.664845529Z" level=warning msg="cleaning up after shim disconnected" id=878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85 namespace=k8s.io Nov 8 00:30:33.665128 containerd[1467]: time="2025-11-08T00:30:33.665059638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:33.670700 containerd[1467]: time="2025-11-08T00:30:33.670629716Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:33.686419 containerd[1467]: time="2025-11-08T00:30:33.686355817Z" level=info msg="StopContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" returns successfully" Nov 8 00:30:33.687038 containerd[1467]: time="2025-11-08T00:30:33.687007972Z" level=info msg="StopPodSandbox for \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\"" Nov 8 00:30:33.687099 containerd[1467]: time="2025-11-08T00:30:33.687060099Z" level=info msg="Container to stop \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:30:33.687099 containerd[1467]: time="2025-11-08T00:30:33.687078313Z" level=info msg="Container to stop \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:30:33.687099 containerd[1467]: time="2025-11-08T00:30:33.687091518Z" level=info msg="Container to stop \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Nov 8 00:30:33.687192 containerd[1467]: time="2025-11-08T00:30:33.687108029Z" level=info msg="Container to stop \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:30:33.687192 containerd[1467]: time="2025-11-08T00:30:33.687120973Z" level=info msg="Container to stop \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:30:33.694572 systemd[1]: cri-containerd-f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6.scope: Deactivated successfully. Nov 8 00:30:33.700950 containerd[1467]: time="2025-11-08T00:30:33.700885770Z" level=info msg="TearDown network for sandbox \"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\" successfully" Nov 8 00:30:33.700950 containerd[1467]: time="2025-11-08T00:30:33.700935352Z" level=info msg="StopPodSandbox for \"878ffe00eb4fdfd799c117a5d844e40ed9089965178cc1d94f46b90c086d8b85\" returns successfully" Nov 8 00:30:33.727592 containerd[1467]: time="2025-11-08T00:30:33.727504224Z" level=info msg="shim disconnected" id=f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6 namespace=k8s.io Nov 8 00:30:33.727592 containerd[1467]: time="2025-11-08T00:30:33.727584524Z" level=warning msg="cleaning up after shim disconnected" id=f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6 namespace=k8s.io Nov 8 00:30:33.727592 containerd[1467]: time="2025-11-08T00:30:33.727595895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:33.745921 containerd[1467]: time="2025-11-08T00:30:33.745206117Z" level=info msg="TearDown network for sandbox \"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" successfully" Nov 8 00:30:33.745921 containerd[1467]: time="2025-11-08T00:30:33.745241202Z" level=info msg="StopPodSandbox for 
\"f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6\" returns successfully" Nov 8 00:30:33.756562 kubelet[2558]: I1108 00:30:33.756524 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j85s\" (UniqueName: \"kubernetes.io/projected/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-kube-api-access-8j85s\") pod \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\" (UID: \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\") " Nov 8 00:30:33.756562 kubelet[2558]: I1108 00:30:33.756567 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-cilium-config-path\") pod \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\" (UID: \"7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0\") " Nov 8 00:30:33.760573 kubelet[2558]: I1108 00:30:33.760535 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0" (UID: "7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:30:33.762298 kubelet[2558]: I1108 00:30:33.762264 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-kube-api-access-8j85s" (OuterVolumeSpecName: "kube-api-access-8j85s") pod "7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0" (UID: "7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0"). InnerVolumeSpecName "kube-api-access-8j85s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:30:33.857693 kubelet[2558]: I1108 00:30:33.857622 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-bpf-maps\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857693 kubelet[2558]: I1108 00:30:33.857683 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-cgroup\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857716 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hubble-tls\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857741 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-kernel\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857768 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-run\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857765 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-bpf-maps" 
(OuterVolumeSpecName: "bpf-maps") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857786 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cni-path\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.857894 kubelet[2558]: I1108 00:30:33.857850 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cni-path" (OuterVolumeSpecName: "cni-path") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858060 kubelet[2558]: I1108 00:30:33.857859 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-net\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858060 kubelet[2558]: I1108 00:30:33.857877 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-xtables-lock\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858060 kubelet[2558]: I1108 00:30:33.857888 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: 
"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858060 kubelet[2558]: I1108 00:30:33.857902 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-config-path\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858060 kubelet[2558]: I1108 00:30:33.857915 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.857922 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-clustermesh-secrets\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.857936 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-etc-cni-netd\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.857953 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-lib-modules\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858210 
kubelet[2558]: I1108 00:30:33.857969 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hostproc\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.857987 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74fxf\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-kube-api-access-74fxf\") pod \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\" (UID: \"4cc99aa1-5f3d-4a28-aa6f-c204c823ce46\") " Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.858039 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.858210 kubelet[2558]: I1108 00:30:33.858052 2558 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.858061 2558 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.858070 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.858080 2558 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cni-path\") on node 
\"localhost\" DevicePath \"\"" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.858088 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8j85s\" (UniqueName: \"kubernetes.io/projected/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0-kube-api-access-8j85s\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.857938 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858629 kubelet[2558]: I1108 00:30:33.857954 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858839 kubelet[2558]: I1108 00:30:33.857977 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858839 kubelet[2558]: I1108 00:30:33.857997 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858839 kubelet[2558]: I1108 00:30:33.858498 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.858839 kubelet[2558]: I1108 00:30:33.858553 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hostproc" (OuterVolumeSpecName: "hostproc") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:30:33.861654 kubelet[2558]: I1108 00:30:33.861611 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:30:33.861947 kubelet[2558]: I1108 00:30:33.861920 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:30:33.862272 kubelet[2558]: I1108 00:30:33.862238 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:30:33.863780 kubelet[2558]: I1108 00:30:33.863752 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-kube-api-access-74fxf" (OuterVolumeSpecName: "kube-api-access-74fxf") pod "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" (UID: "4cc99aa1-5f3d-4a28-aa6f-c204c823ce46"). InnerVolumeSpecName "kube-api-access-74fxf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:30:33.958366 kubelet[2558]: I1108 00:30:33.958298 2558 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958366 kubelet[2558]: I1108 00:30:33.958345 2558 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958366 kubelet[2558]: I1108 00:30:33.958358 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958366 kubelet[2558]: I1108 00:30:33.958369 2558 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958366 kubelet[2558]: I1108 00:30:33.958379 2558 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958667 kubelet[2558]: I1108 00:30:33.958415 2558 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958667 kubelet[2558]: I1108 00:30:33.958439 2558 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958667 kubelet[2558]: I1108 00:30:33.958451 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-74fxf\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-kube-api-access-74fxf\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958667 kubelet[2558]: I1108 00:30:33.958463 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:33.958667 kubelet[2558]: I1108 00:30:33.958473 2558 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 8 00:30:34.535731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6-rootfs.mount: Deactivated successfully. 
Nov 8 00:30:34.535885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8746a21c33410089aea3176ce6c61f4a1d44f5408beb315eb1dc65c8af5dea6-shm.mount: Deactivated successfully. Nov 8 00:30:34.535991 systemd[1]: var-lib-kubelet-pods-4cc99aa1\x2d5f3d\x2d4a28\x2daa6f\x2dc204c823ce46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74fxf.mount: Deactivated successfully. Nov 8 00:30:34.536095 systemd[1]: var-lib-kubelet-pods-4cc99aa1\x2d5f3d\x2d4a28\x2daa6f\x2dc204c823ce46-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 8 00:30:34.536228 systemd[1]: var-lib-kubelet-pods-4cc99aa1\x2d5f3d\x2d4a28\x2daa6f\x2dc204c823ce46-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 8 00:30:34.536345 systemd[1]: var-lib-kubelet-pods-7bbbc2a2\x2dce9b\x2d4935\x2d908d\x2dfad37e9ad9e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8j85s.mount: Deactivated successfully. Nov 8 00:30:34.628218 kubelet[2558]: I1108 00:30:34.628191 2558 scope.go:117] "RemoveContainer" containerID="8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975" Nov 8 00:30:34.629731 containerd[1467]: time="2025-11-08T00:30:34.629693105Z" level=info msg="RemoveContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\"" Nov 8 00:30:34.636368 containerd[1467]: time="2025-11-08T00:30:34.636240341Z" level=info msg="RemoveContainer for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" returns successfully" Nov 8 00:30:34.636287 systemd[1]: Removed slice kubepods-burstable-pod4cc99aa1_5f3d_4a28_aa6f_c204c823ce46.slice - libcontainer container kubepods-burstable-pod4cc99aa1_5f3d_4a28_aa6f_c204c823ce46.slice. Nov 8 00:30:34.636718 systemd[1]: kubepods-burstable-pod4cc99aa1_5f3d_4a28_aa6f_c204c823ce46.slice: Consumed 7.311s CPU time. 
Nov 8 00:30:34.636779 kubelet[2558]: I1108 00:30:34.636615 2558 scope.go:117] "RemoveContainer" containerID="c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a" Nov 8 00:30:34.637909 containerd[1467]: time="2025-11-08T00:30:34.637869490Z" level=info msg="RemoveContainer for \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\"" Nov 8 00:30:34.638927 systemd[1]: Removed slice kubepods-besteffort-pod7bbbc2a2_ce9b_4935_908d_fad37e9ad9e0.slice - libcontainer container kubepods-besteffort-pod7bbbc2a2_ce9b_4935_908d_fad37e9ad9e0.slice. Nov 8 00:30:34.643610 containerd[1467]: time="2025-11-08T00:30:34.643565567Z" level=info msg="RemoveContainer for \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\" returns successfully" Nov 8 00:30:34.643769 kubelet[2558]: I1108 00:30:34.643741 2558 scope.go:117] "RemoveContainer" containerID="5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a" Nov 8 00:30:34.644919 containerd[1467]: time="2025-11-08T00:30:34.644874419Z" level=info msg="RemoveContainer for \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\"" Nov 8 00:30:34.654331 containerd[1467]: time="2025-11-08T00:30:34.654271150Z" level=info msg="RemoveContainer for \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\" returns successfully" Nov 8 00:30:34.655273 kubelet[2558]: I1108 00:30:34.655248 2558 scope.go:117] "RemoveContainer" containerID="15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6" Nov 8 00:30:34.656638 containerd[1467]: time="2025-11-08T00:30:34.656604433Z" level=info msg="RemoveContainer for \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\"" Nov 8 00:30:34.660603 containerd[1467]: time="2025-11-08T00:30:34.660564001Z" level=info msg="RemoveContainer for \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\" returns successfully" Nov 8 00:30:34.660831 kubelet[2558]: I1108 00:30:34.660744 2558 scope.go:117] "RemoveContainer" 
containerID="a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d" Nov 8 00:30:34.662495 containerd[1467]: time="2025-11-08T00:30:34.662078666Z" level=info msg="RemoveContainer for \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\"" Nov 8 00:30:34.678675 containerd[1467]: time="2025-11-08T00:30:34.678616481Z" level=info msg="RemoveContainer for \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\" returns successfully" Nov 8 00:30:34.678948 kubelet[2558]: I1108 00:30:34.678863 2558 scope.go:117] "RemoveContainer" containerID="8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975" Nov 8 00:30:34.683204 containerd[1467]: time="2025-11-08T00:30:34.683154698Z" level=error msg="ContainerStatus for \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\": not found" Nov 8 00:30:34.683397 kubelet[2558]: E1108 00:30:34.683365 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\": not found" containerID="8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975" Nov 8 00:30:34.683494 kubelet[2558]: I1108 00:30:34.683435 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975"} err="failed to get container status \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\": rpc error: code = NotFound desc = an error occurred when try to find container \"8202a1c4b75bc3941757e7656c23a4b0034baf9ac7d31bae7e0145d47501d975\": not found" Nov 8 00:30:34.683494 kubelet[2558]: I1108 00:30:34.683489 2558 scope.go:117] "RemoveContainer" 
containerID="c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a" Nov 8 00:30:34.683724 containerd[1467]: time="2025-11-08T00:30:34.683691468Z" level=error msg="ContainerStatus for \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\": not found" Nov 8 00:30:34.683987 kubelet[2558]: E1108 00:30:34.683939 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\": not found" containerID="c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a" Nov 8 00:30:34.684042 kubelet[2558]: I1108 00:30:34.684004 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a"} err="failed to get container status \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4c2e2666ee9ec3f5690bc21e4c5621f4f613e5896707ab1d1f60ed3b913b57a\": not found" Nov 8 00:30:34.684082 kubelet[2558]: I1108 00:30:34.684047 2558 scope.go:117] "RemoveContainer" containerID="5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a" Nov 8 00:30:34.684331 containerd[1467]: time="2025-11-08T00:30:34.684301447Z" level=error msg="ContainerStatus for \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\": not found" Nov 8 00:30:34.684517 kubelet[2558]: E1108 00:30:34.684470 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\": not found" containerID="5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a" Nov 8 00:30:34.684517 kubelet[2558]: I1108 00:30:34.684506 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a"} err="failed to get container status \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f6c3499c5665f1c4e706f9605a5d19b45b13e1cb09ffb4ea1322ac71e85944a\": not found" Nov 8 00:30:34.684517 kubelet[2558]: I1108 00:30:34.684522 2558 scope.go:117] "RemoveContainer" containerID="15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6" Nov 8 00:30:34.684836 containerd[1467]: time="2025-11-08T00:30:34.684696023Z" level=error msg="ContainerStatus for \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\": not found" Nov 8 00:30:34.684891 kubelet[2558]: E1108 00:30:34.684832 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\": not found" containerID="15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6" Nov 8 00:30:34.684891 kubelet[2558]: I1108 00:30:34.684858 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6"} err="failed to get container status \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"15ba480f088db303910919f9ce29fd4531b9b082c0920ba9d41a130a21a3f5f6\": not found" Nov 8 00:30:34.684891 kubelet[2558]: I1108 00:30:34.684876 2558 scope.go:117] "RemoveContainer" containerID="a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d" Nov 8 00:30:34.685079 containerd[1467]: time="2025-11-08T00:30:34.685044022Z" level=error msg="ContainerStatus for \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\": not found" Nov 8 00:30:34.685158 kubelet[2558]: E1108 00:30:34.685136 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\": not found" containerID="a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d" Nov 8 00:30:34.685226 kubelet[2558]: I1108 00:30:34.685156 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d"} err="failed to get container status \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5c12fa8ec5c312a900d3bcf4c92337e114faf45c47721cdf09f3026dd8fb45d\": not found" Nov 8 00:30:34.685226 kubelet[2558]: I1108 00:30:34.685172 2558 scope.go:117] "RemoveContainer" containerID="8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d" Nov 8 00:30:34.686447 containerd[1467]: time="2025-11-08T00:30:34.686414219Z" level=info msg="RemoveContainer for \"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\"" Nov 8 00:30:34.690094 containerd[1467]: time="2025-11-08T00:30:34.690047728Z" level=info msg="RemoveContainer for 
\"8f7d22d83518d2a3069413a1b1897bea7fb922619e363e4faff46a9314703b6d\" returns successfully" Nov 8 00:30:35.344802 kubelet[2558]: I1108 00:30:35.344735 2558 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:30:35Z","lastTransitionTime":"2025-11-08T00:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 8 00:30:35.421252 kubelet[2558]: I1108 00:30:35.421195 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc99aa1-5f3d-4a28-aa6f-c204c823ce46" path="/var/lib/kubelet/pods/4cc99aa1-5f3d-4a28-aa6f-c204c823ce46/volumes" Nov 8 00:30:35.422157 kubelet[2558]: I1108 00:30:35.422126 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0" path="/var/lib/kubelet/pods/7bbbc2a2-ce9b-4935-908d-fad37e9ad9e0/volumes" Nov 8 00:30:35.457442 sshd[4189]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:35.469649 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:44108.service: Deactivated successfully. Nov 8 00:30:35.472017 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:30:35.474040 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:30:35.483653 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Nov 8 00:30:35.484752 systemd-logind[1438]: Removed session 25. Nov 8 00:30:35.523781 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:35.525691 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:35.530678 systemd-logind[1438]: New session 26 of user core. 
Nov 8 00:30:35.540687 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 8 00:30:35.995370 sshd[4351]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:36.009308 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:44114.service: Deactivated successfully. Nov 8 00:30:36.012067 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:30:36.017040 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:30:36.023337 systemd[1]: Started sshd@26-10.0.0.140:22-10.0.0.1:57838.service - OpenSSH per-connection server daemon (10.0.0.1:57838). Nov 8 00:30:36.025585 systemd-logind[1438]: Removed session 26. Nov 8 00:30:36.037352 systemd[1]: Created slice kubepods-burstable-pod986ebff0_c0d9_4208_8c8e_db19165f6ba5.slice - libcontainer container kubepods-burstable-pod986ebff0_c0d9_4208_8c8e_db19165f6ba5.slice. Nov 8 00:30:36.063183 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 57838 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:36.065222 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:36.069305 systemd-logind[1438]: New session 27 of user core. Nov 8 00:30:36.085524 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 8 00:30:36.137626 sshd[4364]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:36.149701 systemd[1]: sshd@26-10.0.0.140:22-10.0.0.1:57838.service: Deactivated successfully. Nov 8 00:30:36.151842 systemd[1]: session-27.scope: Deactivated successfully. Nov 8 00:30:36.154202 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit. Nov 8 00:30:36.163720 systemd[1]: Started sshd@27-10.0.0.140:22-10.0.0.1:57848.service - OpenSSH per-connection server daemon (10.0.0.1:57848). Nov 8 00:30:36.164951 systemd-logind[1438]: Removed session 27. 
Nov 8 00:30:36.170681 kubelet[2558]: I1108 00:30:36.170649 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-bpf-maps\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170700 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-host-proc-sys-net\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170755 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-host-proc-sys-kernel\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170778 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/986ebff0-c0d9-4208-8c8e-db19165f6ba5-hubble-tls\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170802 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-etc-cni-netd\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170846 2558 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-lib-modules\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.170981 kubelet[2558]: I1108 00:30:36.170873 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/986ebff0-c0d9-4208-8c8e-db19165f6ba5-cilium-config-path\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171125 kubelet[2558]: I1108 00:30:36.170907 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/986ebff0-c0d9-4208-8c8e-db19165f6ba5-cilium-ipsec-secrets\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171125 kubelet[2558]: I1108 00:30:36.170927 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/986ebff0-c0d9-4208-8c8e-db19165f6ba5-clustermesh-secrets\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171125 kubelet[2558]: I1108 00:30:36.171022 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smx54\" (UniqueName: \"kubernetes.io/projected/986ebff0-c0d9-4208-8c8e-db19165f6ba5-kube-api-access-smx54\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171125 kubelet[2558]: I1108 00:30:36.171084 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-cilium-cgroup\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171125 kubelet[2558]: I1108 00:30:36.171110 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-cni-path\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171252 kubelet[2558]: I1108 00:30:36.171139 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-cilium-run\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171252 kubelet[2558]: I1108 00:30:36.171161 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-xtables-lock\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.171252 kubelet[2558]: I1108 00:30:36.171184 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/986ebff0-c0d9-4208-8c8e-db19165f6ba5-hostproc\") pod \"cilium-jl2rk\" (UID: \"986ebff0-c0d9-4208-8c8e-db19165f6ba5\") " pod="kube-system/cilium-jl2rk" Nov 8 00:30:36.201125 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 57848 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:30:36.203052 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:36.207884 systemd-logind[1438]: New session 28 of user core. 
Nov 8 00:30:36.218762 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 8 00:30:36.346120 kubelet[2558]: E1108 00:30:36.346059 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:36.347454 containerd[1467]: time="2025-11-08T00:30:36.347383102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jl2rk,Uid:986ebff0-c0d9-4208-8c8e-db19165f6ba5,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:36.380817 containerd[1467]: time="2025-11-08T00:30:36.379670822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:36.380817 containerd[1467]: time="2025-11-08T00:30:36.380794635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:36.380994 containerd[1467]: time="2025-11-08T00:30:36.380813499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:36.381097 containerd[1467]: time="2025-11-08T00:30:36.381054290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:36.403579 systemd[1]: Started cri-containerd-18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc.scope - libcontainer container 18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc. 
Nov 8 00:30:36.430886 containerd[1467]: time="2025-11-08T00:30:36.430836012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jl2rk,Uid:986ebff0-c0d9-4208-8c8e-db19165f6ba5,Namespace:kube-system,Attempt:0,} returns sandbox id \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\"" Nov 8 00:30:36.431733 kubelet[2558]: E1108 00:30:36.431712 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:36.437648 containerd[1467]: time="2025-11-08T00:30:36.437613307Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:30:36.449659 containerd[1467]: time="2025-11-08T00:30:36.449610838Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612\"" Nov 8 00:30:36.451140 containerd[1467]: time="2025-11-08T00:30:36.450053206Z" level=info msg="StartContainer for \"e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612\"" Nov 8 00:30:36.482544 systemd[1]: Started cri-containerd-e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612.scope - libcontainer container e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612. Nov 8 00:30:36.512941 containerd[1467]: time="2025-11-08T00:30:36.512900354Z" level=info msg="StartContainer for \"e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612\" returns successfully" Nov 8 00:30:36.524559 systemd[1]: cri-containerd-e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612.scope: Deactivated successfully. 
Nov 8 00:30:36.556646 containerd[1467]: time="2025-11-08T00:30:36.556577077Z" level=info msg="shim disconnected" id=e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612 namespace=k8s.io Nov 8 00:30:36.556646 containerd[1467]: time="2025-11-08T00:30:36.556635366Z" level=warning msg="cleaning up after shim disconnected" id=e269649be469da89669174e4c47ebd4ad6e637b5e1ffcf1da1704abd4606f612 namespace=k8s.io Nov 8 00:30:36.556646 containerd[1467]: time="2025-11-08T00:30:36.556644453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:36.640685 kubelet[2558]: E1108 00:30:36.640379 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:36.645748 containerd[1467]: time="2025-11-08T00:30:36.645715302Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:30:36.661519 containerd[1467]: time="2025-11-08T00:30:36.661480979Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b\"" Nov 8 00:30:36.662006 containerd[1467]: time="2025-11-08T00:30:36.661949456Z" level=info msg="StartContainer for \"269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b\"" Nov 8 00:30:36.690553 systemd[1]: Started cri-containerd-269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b.scope - libcontainer container 269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b. 
Nov 8 00:30:36.722182 containerd[1467]: time="2025-11-08T00:30:36.722129306Z" level=info msg="StartContainer for \"269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b\" returns successfully" Nov 8 00:30:36.731665 systemd[1]: cri-containerd-269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b.scope: Deactivated successfully. Nov 8 00:30:36.755714 containerd[1467]: time="2025-11-08T00:30:36.755626009Z" level=info msg="shim disconnected" id=269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b namespace=k8s.io Nov 8 00:30:36.755714 containerd[1467]: time="2025-11-08T00:30:36.755695538Z" level=warning msg="cleaning up after shim disconnected" id=269f2940ddf2359a19046c5c31bb0d86f05f932a7341b5d33c662f0a2837de6b namespace=k8s.io Nov 8 00:30:36.755714 containerd[1467]: time="2025-11-08T00:30:36.755704014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:37.644637 kubelet[2558]: E1108 00:30:37.644597 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:37.652713 containerd[1467]: time="2025-11-08T00:30:37.652653176Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:30:37.675144 containerd[1467]: time="2025-11-08T00:30:37.675089943Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3\"" Nov 8 00:30:37.676130 containerd[1467]: time="2025-11-08T00:30:37.676060841Z" level=info msg="StartContainer for \"b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3\"" Nov 8 00:30:37.703562 systemd[1]: Started 
cri-containerd-b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3.scope - libcontainer container b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3. Nov 8 00:30:37.737933 containerd[1467]: time="2025-11-08T00:30:37.737885587Z" level=info msg="StartContainer for \"b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3\" returns successfully" Nov 8 00:30:37.738086 systemd[1]: cri-containerd-b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3.scope: Deactivated successfully. Nov 8 00:30:37.763502 containerd[1467]: time="2025-11-08T00:30:37.763432327Z" level=info msg="shim disconnected" id=b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3 namespace=k8s.io Nov 8 00:30:37.763502 containerd[1467]: time="2025-11-08T00:30:37.763493783Z" level=warning msg="cleaning up after shim disconnected" id=b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3 namespace=k8s.io Nov 8 00:30:37.763502 containerd[1467]: time="2025-11-08T00:30:37.763502239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:38.278124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4fec67a5b38a8206d054b2f96f90db875cfb9e2c86a05555630df864e46ced3-rootfs.mount: Deactivated successfully. 
Nov 8 00:30:38.597557 kubelet[2558]: E1108 00:30:38.597386 2558 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:30:38.648014 kubelet[2558]: E1108 00:30:38.647964 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:38.655493 containerd[1467]: time="2025-11-08T00:30:38.655443515Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:30:38.673726 containerd[1467]: time="2025-11-08T00:30:38.673673293Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a\"" Nov 8 00:30:38.674516 containerd[1467]: time="2025-11-08T00:30:38.674447996Z" level=info msg="StartContainer for \"5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a\"" Nov 8 00:30:38.719252 systemd[1]: Started cri-containerd-5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a.scope - libcontainer container 5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a. Nov 8 00:30:38.752269 systemd[1]: cri-containerd-5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a.scope: Deactivated successfully. 
Nov 8 00:30:38.755485 containerd[1467]: time="2025-11-08T00:30:38.755444440Z" level=info msg="StartContainer for \"5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a\" returns successfully" Nov 8 00:30:38.780070 containerd[1467]: time="2025-11-08T00:30:38.779975512Z" level=info msg="shim disconnected" id=5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a namespace=k8s.io Nov 8 00:30:38.780070 containerd[1467]: time="2025-11-08T00:30:38.780055662Z" level=warning msg="cleaning up after shim disconnected" id=5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a namespace=k8s.io Nov 8 00:30:38.780070 containerd[1467]: time="2025-11-08T00:30:38.780068747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:39.278121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f7cf02149cc29571c0f630be8ad9d9209276694546754c5d399b3a13d0e353a-rootfs.mount: Deactivated successfully. Nov 8 00:30:39.651570 kubelet[2558]: E1108 00:30:39.651298 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:39.657328 containerd[1467]: time="2025-11-08T00:30:39.657243694Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:30:39.676510 containerd[1467]: time="2025-11-08T00:30:39.676452588Z" level=info msg="CreateContainer within sandbox \"18deeb0607b0c49fb7259dd9a0adb9ef19646d91db42ddf248030beac23c08cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"621976467137bb99c30c32ecf922a0401cb0bc97732b1b91c30bfc78ce67cfab\"" Nov 8 00:30:39.677137 containerd[1467]: time="2025-11-08T00:30:39.676989415Z" level=info msg="StartContainer for \"621976467137bb99c30c32ecf922a0401cb0bc97732b1b91c30bfc78ce67cfab\"" Nov 8 00:30:39.712994 systemd[1]: 
Started cri-containerd-621976467137bb99c30c32ecf922a0401cb0bc97732b1b91c30bfc78ce67cfab.scope - libcontainer container 621976467137bb99c30c32ecf922a0401cb0bc97732b1b91c30bfc78ce67cfab. Nov 8 00:30:39.750004 containerd[1467]: time="2025-11-08T00:30:39.749960489Z" level=info msg="StartContainer for \"621976467137bb99c30c32ecf922a0401cb0bc97732b1b91c30bfc78ce67cfab\" returns successfully" Nov 8 00:30:40.232474 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 8 00:30:40.656368 kubelet[2558]: E1108 00:30:40.656328 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:42.347955 kubelet[2558]: E1108 00:30:42.347883 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:42.417856 kubelet[2558]: E1108 00:30:42.417801 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:43.552472 systemd-networkd[1394]: lxc_health: Link UP Nov 8 00:30:43.560193 systemd-networkd[1394]: lxc_health: Gained carrier Nov 8 00:30:44.350521 kubelet[2558]: E1108 00:30:44.350473 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:44.373318 kubelet[2558]: I1108 00:30:44.372459 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jl2rk" podStartSLOduration=8.372431238 podStartE2EDuration="8.372431238s" podCreationTimestamp="2025-11-08 00:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 
00:30:40.670534133 +0000 UTC m=+77.355696863" watchObservedRunningTime="2025-11-08 00:30:44.372431238 +0000 UTC m=+81.057593938" Nov 8 00:30:44.665953 kubelet[2558]: E1108 00:30:44.665548 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:45.524712 systemd-networkd[1394]: lxc_health: Gained IPv6LL Nov 8 00:30:45.669332 kubelet[2558]: E1108 00:30:45.669212 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:30:48.995954 sshd[4372]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:49.001095 systemd[1]: sshd@27-10.0.0.140:22-10.0.0.1:57848.service: Deactivated successfully. Nov 8 00:30:49.004058 systemd[1]: session-28.scope: Deactivated successfully. Nov 8 00:30:49.005086 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit. Nov 8 00:30:49.005949 systemd-logind[1438]: Removed session 28.