Oct 31 00:50:03.008215 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025 Oct 31 00:50:03.008242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:50:03.008256 kernel: BIOS-provided physical RAM map: Oct 31 00:50:03.008264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 31 00:50:03.008273 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 31 00:50:03.008281 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 31 00:50:03.008291 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 31 00:50:03.008299 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 31 00:50:03.008307 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 00:50:03.008319 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 31 00:50:03.008327 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 31 00:50:03.008335 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 31 00:50:03.008343 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 31 00:50:03.008351 kernel: NX (Execute Disable) protection: active Oct 31 00:50:03.008361 kernel: APIC: Static calls initialized Oct 31 00:50:03.008389 kernel: SMBIOS 2.8 present. 
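The command line logged above enables dm-verity protection of the /usr partition (verity.usr= plus the verity.usrhash root hash) and sends the console to the first serial port. The same parameters and the firmware-provided e820 memory map can be re-checked from userspace after boot; a minimal sketch, assuming shell access to the running Flatcar guest:

```bash
# Kernel command line as seen by the running kernel
cat /proc/cmdline

# BIOS-e820 memory map entries from the current boot's kernel log
journalctl -k -b | grep -i 'BIOS-e820'
```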
Oct 31 00:50:03.008398 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 31 00:50:03.008407 kernel: Hypervisor detected: KVM Oct 31 00:50:03.008416 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 31 00:50:03.008425 kernel: kvm-clock: using sched offset of 2551698005 cycles Oct 31 00:50:03.008435 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 31 00:50:03.008444 kernel: tsc: Detected 2794.748 MHz processor Oct 31 00:50:03.008453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 00:50:03.008463 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 00:50:03.008472 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 31 00:50:03.008486 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 31 00:50:03.008496 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 00:50:03.008505 kernel: Using GB pages for direct mapping Oct 31 00:50:03.008515 kernel: ACPI: Early table checksum verification disabled Oct 31 00:50:03.008525 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 31 00:50:03.008534 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008543 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008553 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008566 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 31 00:50:03.008575 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008584 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008593 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008602 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:50:03.008611 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 31 00:50:03.008621 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 31 00:50:03.008635 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 31 00:50:03.008647 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 31 00:50:03.008656 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 31 00:50:03.008666 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 31 00:50:03.008675 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 31 00:50:03.008682 kernel: No NUMA configuration found Oct 31 00:50:03.008690 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 31 00:50:03.008697 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Oct 31 00:50:03.008707 kernel: Zone ranges: Oct 31 00:50:03.008715 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 00:50:03.008722 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 31 00:50:03.008731 kernel: Normal empty Oct 31 00:50:03.008740 kernel: Movable zone start for each node Oct 31 00:50:03.008750 kernel: Early memory node ranges Oct 31 00:50:03.008760 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 31 00:50:03.008770 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 31 00:50:03.008779 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Oct 31 00:50:03.008808 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 00:50:03.008819 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 31 00:50:03.008829 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 31 00:50:03.008838 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 31 00:50:03.008848 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 31 00:50:03.008859 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 31 00:50:03.008868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 31 00:50:03.008879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 31 00:50:03.008888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 31 00:50:03.008903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 31 00:50:03.008913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 31 00:50:03.008923 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 00:50:03.008932 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 31 00:50:03.008942 kernel: TSC deadline timer available Oct 31 00:50:03.008952 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 31 00:50:03.008962 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 31 00:50:03.008972 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 31 00:50:03.008982 kernel: kvm-guest: setup PV sched yield Oct 31 00:50:03.008996 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 31 00:50:03.009005 kernel: Booting paravirtualized kernel on KVM Oct 31 00:50:03.009015 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 00:50:03.009025 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 31 00:50:03.009035 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Oct 31 00:50:03.009045 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Oct 31 00:50:03.009055 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 31 00:50:03.009064 kernel: kvm-guest: PV spinlocks enabled Oct 31 00:50:03.009074 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 31 00:50:03.009088 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:50:03.009097 kernel: random: crng init done Oct 31 00:50:03.009107 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 00:50:03.009117 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 00:50:03.009138 kernel: Fallback order for Node 0: 0 Oct 31 00:50:03.009147 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Oct 31 00:50:03.009157 kernel: Policy zone: DMA32 Oct 31 00:50:03.009167 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 00:50:03.009177 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved) Oct 31 00:50:03.009191 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 00:50:03.009201 kernel: ftrace: allocating 37980 entries in 149 pages Oct 31 00:50:03.009211 kernel: ftrace: allocated 149 pages with 4 groups Oct 31 00:50:03.009220 kernel: Dynamic Preempt: voluntary Oct 31 00:50:03.009230 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 00:50:03.009241 kernel: rcu: RCU event tracing is enabled. Oct 31 00:50:03.009251 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 00:50:03.009261 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 00:50:03.009271 kernel: Rude variant of Tasks RCU enabled. Oct 31 00:50:03.009284 kernel: Tracing variant of Tasks RCU enabled. Oct 31 00:50:03.009294 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 00:50:03.009303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 00:50:03.009313 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 31 00:50:03.009323 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 00:50:03.009331 kernel: Console: colour VGA+ 80x25 Oct 31 00:50:03.009339 kernel: printk: console [ttyS0] enabled Oct 31 00:50:03.009346 kernel: ACPI: Core revision 20230628 Oct 31 00:50:03.009353 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 31 00:50:03.009392 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 00:50:03.009400 kernel: x2apic enabled Oct 31 00:50:03.009407 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 00:50:03.009414 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 31 00:50:03.009421 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 31 00:50:03.009429 kernel: kvm-guest: setup PV IPIs Oct 31 00:50:03.009436 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 00:50:03.009454 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 31 00:50:03.009461 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 31 00:50:03.009469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 31 00:50:03.009476 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 31 00:50:03.009484 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 31 00:50:03.009493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 00:50:03.009501 kernel: Spectre V2 : Mitigation: Retpolines Oct 31 00:50:03.009508 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 31 00:50:03.009516 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 31 00:50:03.009526 kernel: active return thunk: retbleed_return_thunk Oct 31 00:50:03.009534 kernel: RETBleed: Mitigation: untrained return thunk Oct 31 00:50:03.009541 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 00:50:03.009549 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 00:50:03.009556 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 31 00:50:03.009564 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 31 00:50:03.009572 kernel: active return thunk: srso_return_thunk Oct 31 00:50:03.009580 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 31 00:50:03.009587 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 00:50:03.009597 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 00:50:03.009605 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 00:50:03.009612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 00:50:03.009620 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 00:50:03.009627 kernel: Freeing SMP alternatives memory: 32K Oct 31 00:50:03.009635 kernel: pid_max: default: 32768 minimum: 301 Oct 31 00:50:03.009642 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 31 00:50:03.009650 kernel: landlock: Up and running. Oct 31 00:50:03.009657 kernel: SELinux: Initializing. Oct 31 00:50:03.009667 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:50:03.009675 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:50:03.009682 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 31 00:50:03.009690 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:50:03.009697 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:50:03.009705 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:50:03.009712 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 31 00:50:03.009720 kernel: ... version: 0 Oct 31 00:50:03.009730 kernel: ... bit width: 48 Oct 31 00:50:03.009737 kernel: ... generic registers: 6 Oct 31 00:50:03.009745 kernel: ... value mask: 0000ffffffffffff Oct 31 00:50:03.009752 kernel: ... max period: 00007fffffffffff Oct 31 00:50:03.009759 kernel: ... fixed-purpose events: 0 Oct 31 00:50:03.009767 kernel: ... 
event mask: 000000000000003f Oct 31 00:50:03.009774 kernel: signal: max sigframe size: 1776 Oct 31 00:50:03.009782 kernel: rcu: Hierarchical SRCU implementation. Oct 31 00:50:03.009789 kernel: rcu: Max phase no-delay instances is 400. Oct 31 00:50:03.009797 kernel: smp: Bringing up secondary CPUs ... Oct 31 00:50:03.009807 kernel: smpboot: x86: Booting SMP configuration: Oct 31 00:50:03.009814 kernel: .... node #0, CPUs: #1 #2 #3 Oct 31 00:50:03.009821 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 00:50:03.009829 kernel: smpboot: Max logical packages: 1 Oct 31 00:50:03.009836 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 31 00:50:03.009844 kernel: devtmpfs: initialized Oct 31 00:50:03.009851 kernel: x86/mm: Memory block size: 128MB Oct 31 00:50:03.009859 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 00:50:03.009866 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 00:50:03.009876 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 00:50:03.009883 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 00:50:03.009891 kernel: audit: initializing netlink subsys (disabled) Oct 31 00:50:03.009898 kernel: audit: type=2000 audit(1761871801.747:1): state=initialized audit_enabled=0 res=1 Oct 31 00:50:03.009906 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 00:50:03.009913 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 00:50:03.009921 kernel: cpuidle: using governor menu Oct 31 00:50:03.009928 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 00:50:03.009936 kernel: dca service started, version 1.12.1 Oct 31 00:50:03.009946 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 31 00:50:03.009953 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 31 00:50:03.009961 kernel: PCI: Using configuration type 1 for base access Oct 31 00:50:03.009968 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
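The preceding lines report the virtual CPU model (AMD EPYC 7402P), the speculative-execution mitigations selected at boot (Spectre V1/V2, RETBleed, SRSO), and the bring-up of all four vCPUs. The effective mitigation state can be confirmed from sysfs once the system is up; a short sketch, assuming a shell on the guest:

```bash
# One file per known vulnerability; the contents name the active mitigation
grep -r . /sys/devices/system/cpu/vulnerabilities/

# CPU topology and model as seen inside the guest
lscpu
```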
Oct 31 00:50:03.009976 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 00:50:03.009983 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 00:50:03.009991 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 00:50:03.009998 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 00:50:03.010006 kernel: ACPI: Added _OSI(Module Device) Oct 31 00:50:03.010015 kernel: ACPI: Added _OSI(Processor Device) Oct 31 00:50:03.010023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 00:50:03.010030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 00:50:03.010038 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 00:50:03.010045 kernel: ACPI: Interpreter enabled Oct 31 00:50:03.010052 kernel: ACPI: PM: (supports S0 S3 S5) Oct 31 00:50:03.010060 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 00:50:03.010067 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 00:50:03.010075 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 00:50:03.010086 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 31 00:50:03.010093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 00:50:03.010311 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 00:50:03.010458 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 31 00:50:03.010581 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 31 00:50:03.010591 kernel: PCI host bridge to bus 0000:00 Oct 31 00:50:03.010713 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 00:50:03.010831 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 31 00:50:03.010943 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 00:50:03.011062 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 31 00:50:03.011198 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 00:50:03.011311 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 31 00:50:03.011449 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 00:50:03.011587 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 31 00:50:03.011724 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 31 00:50:03.011847 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 31 00:50:03.011968 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 31 00:50:03.012088 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 31 00:50:03.012234 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 00:50:03.012383 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 31 00:50:03.012517 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 31 00:50:03.012638 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 31 00:50:03.012759 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 31 00:50:03.012888 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 31 00:50:03.013010 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Oct 31 00:50:03.013149 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 31 00:50:03.013276 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Oct 31 00:50:03.013432 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 31 00:50:03.013556 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Oct 31 00:50:03.013675 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 31 00:50:03.013797 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 31 00:50:03.013917 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 31 00:50:03.014045 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 31 00:50:03.014186 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 31 00:50:03.014324 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 31 00:50:03.014462 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Oct 31 00:50:03.014585 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Oct 31 00:50:03.014713 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 31 00:50:03.014834 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 31 00:50:03.014845 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 31 00:50:03.014857 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 31 00:50:03.014865 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 00:50:03.014873 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 31 00:50:03.014880 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 31 00:50:03.014888 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 31 00:50:03.014895 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 31 00:50:03.014903 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 31 00:50:03.014910 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 31 00:50:03.014918 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 31 00:50:03.014928 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 31 00:50:03.014935 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 31 00:50:03.014943 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 31 00:50:03.014950 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 31 00:50:03.014958 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 31 00:50:03.014965 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 31 00:50:03.014973 kernel: iommu: Default domain type: Translated Oct 31 00:50:03.014980 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 00:50:03.014988 kernel: PCI: Using ACPI for IRQ routing Oct 31 00:50:03.014995 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 00:50:03.015005 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 31 00:50:03.015013 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 31 00:50:03.015156 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 31 00:50:03.015302 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 31 00:50:03.015473 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 00:50:03.015484 kernel: vgaarb: loaded Oct 31 00:50:03.015492 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 31 00:50:03.015500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 31 00:50:03.015512 kernel: clocksource: Switched to clocksource kvm-clock Oct 31 00:50:03.015519 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 00:50:03.015527 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Oct 31 00:50:03.015534 kernel: pnp: PnP ACPI init Oct 31 00:50:03.015663 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 31 00:50:03.015674 kernel: pnp: PnP ACPI: found 6 devices Oct 31 00:50:03.015682 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 00:50:03.015690 kernel: NET: Registered PF_INET protocol family Oct 31 00:50:03.015702 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 00:50:03.015710 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 00:50:03.015718 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 00:50:03.015726 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 00:50:03.015734 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 31 00:50:03.015741 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 00:50:03.015749 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:50:03.015757 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:50:03.015765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 00:50:03.015775 kernel: NET: Registered PF_XDP protocol family Oct 31 00:50:03.015887 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 31 00:50:03.015995 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 31 00:50:03.016104 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 31 00:50:03.016235 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 31 00:50:03.016345 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 31 00:50:03.016468 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 31 00:50:03.016478 kernel: PCI: CLS 0 bytes, default 64 Oct 31 00:50:03.016490 kernel: Initialise system trusted keyrings Oct 31 00:50:03.016498 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 00:50:03.016506 kernel: Key type asymmetric registered Oct 31 00:50:03.016514 kernel: Asymmetric key parser 'x509' registered Oct 31 00:50:03.016522 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 00:50:03.016529 kernel: io scheduler mq-deadline registered Oct 31 00:50:03.016537 kernel: io scheduler kyber registered Oct 31 00:50:03.016545 kernel: io scheduler bfq registered Oct 31 00:50:03.016553 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 00:50:03.016563 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 31 00:50:03.016571 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 00:50:03.016579 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 00:50:03.016587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 00:50:03.016595 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 00:50:03.016603 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 00:50:03.016611 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 00:50:03.016618 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 00:50:03.016626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 00:50:03.016753 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 31 00:50:03.016869 kernel: rtc_cmos 00:04: registered as rtc0 Oct 31 
00:50:03.016984 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T00:50:02 UTC (1761871802) Oct 31 00:50:03.017097 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 31 00:50:03.017110 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 31 00:50:03.017121 kernel: NET: Registered PF_INET6 protocol family Oct 31 00:50:03.017141 kernel: Segment Routing with IPv6 Oct 31 00:50:03.017148 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 00:50:03.017160 kernel: NET: Registered PF_PACKET protocol family Oct 31 00:50:03.017167 kernel: Key type dns_resolver registered Oct 31 00:50:03.017175 kernel: IPI shorthand broadcast: enabled Oct 31 00:50:03.017183 kernel: sched_clock: Marking stable (701003709, 232262668)->(1067367745, -134101368) Oct 31 00:50:03.017190 kernel: registered taskstats version 1 Oct 31 00:50:03.017198 kernel: Loading compiled-in X.509 certificates Oct 31 00:50:03.017206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 00:50:03.017213 kernel: Key type .fscrypt registered Oct 31 00:50:03.017221 kernel: Key type fscrypt-provisioning registered Oct 31 00:50:03.017230 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 31 00:50:03.017238 kernel: ima: Allocated hash algorithm: sha1 Oct 31 00:50:03.017246 kernel: ima: No architecture policies found Oct 31 00:50:03.017253 kernel: clk: Disabling unused clocks Oct 31 00:50:03.017261 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 00:50:03.017268 kernel: Write protecting the kernel read-only data: 36864k Oct 31 00:50:03.017276 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 00:50:03.017284 kernel: Run /init as init process Oct 31 00:50:03.017292 kernel: with arguments: Oct 31 00:50:03.017301 kernel: /init Oct 31 00:50:03.017309 kernel: with environment: Oct 31 00:50:03.017316 kernel: HOME=/ Oct 31 00:50:03.017324 kernel: TERM=linux Oct 31 00:50:03.017334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 00:50:03.017344 systemd[1]: Detected virtualization kvm. Oct 31 00:50:03.017353 systemd[1]: Detected architecture x86-64. Oct 31 00:50:03.017361 systemd[1]: Running in initrd. Oct 31 00:50:03.017421 systemd[1]: No hostname configured, using default hostname. Oct 31 00:50:03.017429 systemd[1]: Hostname set to . Oct 31 00:50:03.017438 systemd[1]: Initializing machine ID from VM UUID. Oct 31 00:50:03.017446 systemd[1]: Queued start job for default target initrd.target. Oct 31 00:50:03.017454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:50:03.017462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:50:03.017471 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 00:50:03.017479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 00:50:03.017490 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
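systemd's device units shown here encode udev paths, with "-" escaped as \x2d. The disks the initrd is waiting for can be listed directly, and the escaped unit names reproduced, from a shell; a sketch (the virtio device name /dev/vda follows this log and may differ elsewhere):

```bash
# Labels, partition labels and PARTUUIDs of the virtio disk
lsblk -o NAME,LABEL,PARTLABEL,PARTUUID,FSTYPE,SIZE /dev/vda

# Reproduce the escaped device-unit name systemd waits on
systemd-escape --path --suffix=device /dev/disk/by-label/ROOT
```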
Oct 31 00:50:03.017511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 00:50:03.017524 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 00:50:03.017532 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 00:50:03.017541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:50:03.017551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:50:03.017560 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:50:03.017568 systemd[1]: Reached target slices.target - Slice Units. Oct 31 00:50:03.017576 systemd[1]: Reached target swap.target - Swaps. Oct 31 00:50:03.017584 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:50:03.017612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 00:50:03.017621 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 00:50:03.017638 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 00:50:03.017649 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 00:50:03.017657 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:50:03.017674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 00:50:03.017690 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:50:03.017699 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 00:50:03.017707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 00:50:03.017715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 00:50:03.017724 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 00:50:03.017732 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 00:50:03.017743 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 00:50:03.017752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 00:50:03.017760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:50:03.017768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 00:50:03.017776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:50:03.017785 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 00:50:03.017796 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 00:50:03.017827 systemd-journald[193]: Collecting audit messages is disabled. Oct 31 00:50:03.017851 systemd-journald[193]: Journal started Oct 31 00:50:03.017869 systemd-journald[193]: Runtime Journal (/run/log/journal/34db216594684526a4f8d864fc28f1f5) is 6.0M, max 48.4M, 42.3M free. Oct 31 00:50:02.998161 systemd-modules-load[194]: Inserted module 'overlay' Oct 31 00:50:03.085120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 00:50:03.085158 kernel: Bridge firewalling registered Oct 31 00:50:03.026111 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 31 00:50:03.089822 systemd[1]: Started systemd-journald.service - Journal Service. 
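The bridge module warns above that bridged traffic is no longer passed to arp/ip/ip6tables unless br_netfilter is loaded explicitly. Kubernetes-style setups usually load the module at boot and enable the matching sysctls; a minimal sketch (the file names under /etc are conventional choices, not taken from this log):

```bash
# Load br_netfilter now and on every boot
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo modprobe br_netfilter

# Have bridged IPv4/IPv6 traffic traverse iptables
cat <<'EOF' | sudo tee /etc/sysctl.d/99-bridge.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
```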
Oct 31 00:50:03.090270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 00:50:03.092468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:50:03.096139 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:50:03.127610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:50:03.128508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:50:03.129421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 00:50:03.131490 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 00:50:03.150545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:50:03.152382 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:50:03.153143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 00:50:03.156476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 00:50:03.174861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:50:03.192168 systemd-resolved[223]: Positive Trust Anchors: Oct 31 00:50:03.192186 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:50:03.192225 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 00:50:03.194749 systemd-resolved[223]: Defaulting to hostname 'linux'. Oct 31 00:50:03.212512 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 00:50:03.215573 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 00:50:03.219085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:50:03.265294 dracut-cmdline[231]: dracut-dracut-053 Oct 31 00:50:03.268901 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:50:03.363399 kernel: SCSI subsystem initialized Oct 31 00:50:03.373402 kernel: Loading iSCSI transport class v2.0-870. Oct 31 00:50:03.384412 kernel: iscsi: registered transport (tcp) Oct 31 00:50:03.406408 kernel: iscsi: registered transport (qla4xxx) Oct 31 00:50:03.406474 kernel: QLogic iSCSI HBA Driver Oct 31 00:50:03.458956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 00:50:03.472497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
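systemd-resolved starts with the built-in DNSSEC trust anchor and, with no hostname configured yet, falls back to "linux". Its runtime view can be queried once the system is fully up; a brief sketch, assuming resolved remains the active resolver:

```bash
# Global and per-link DNS configuration, including DNSSEC state
resolvectl status

# Current hostname sources (static, transient, pretty)
hostnamectl status
```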
Oct 31 00:50:03.498212 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 00:50:03.498287 kernel: device-mapper: uevent: version 1.0.3 Oct 31 00:50:03.499889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 31 00:50:03.541393 kernel: raid6: avx2x4 gen() 29499 MB/s Oct 31 00:50:03.558393 kernel: raid6: avx2x2 gen() 30908 MB/s Oct 31 00:50:03.576183 kernel: raid6: avx2x1 gen() 25781 MB/s Oct 31 00:50:03.576206 kernel: raid6: using algorithm avx2x2 gen() 30908 MB/s Oct 31 00:50:03.608811 kernel: raid6: .... xor() 19751 MB/s, rmw enabled Oct 31 00:50:03.608832 kernel: raid6: using avx2x2 recovery algorithm Oct 31 00:50:03.630402 kernel: xor: automatically using best checksumming function avx Oct 31 00:50:03.786439 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 00:50:03.800922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 00:50:03.816534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 00:50:03.828330 systemd-udevd[413]: Using default interface naming scheme 'v255'. Oct 31 00:50:03.833082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 00:50:03.845572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 00:50:03.862848 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Oct 31 00:50:03.901514 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 00:50:03.911539 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 00:50:03.977088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 00:50:03.987547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 00:50:04.004878 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 00:50:04.009703 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 31 00:50:04.010145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 00:50:04.024780 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 31 00:50:04.025167 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 00:50:04.025186 kernel: GPT:9289727 != 19775487 Oct 31 00:50:04.025196 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 00:50:04.025206 kernel: GPT:9289727 != 19775487 Oct 31 00:50:04.025216 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 00:50:04.025226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:50:04.017160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 00:50:04.027330 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 00:50:04.035430 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 00:50:04.040408 kernel: libata version 3.00 loaded. Oct 31 00:50:04.040577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 00:50:04.051551 kernel: AVX2 version of gcm_enc/dec engaged. Oct 31 00:50:04.054910 kernel: AES CTR mode by8 optimization enabled Oct 31 00:50:04.057982 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 00:50:04.058193 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 00:50:04.061985 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
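The GPT messages above ("Alternate GPT header not at the end of the disk") are typical when a fixed-size image has been written to a larger virtual disk: the backup header still sits where the image ended (sector 9289727) rather than at the disk's last sector (19775487). The state can be checked, and the backup header relocated if desired, with sgdisk from gptfdisk; a hedged sketch, assuming the tool is available on or alongside the host:

```bash
# Report GPT problems without changing anything
sudo sgdisk --verify /dev/vda

# Move the backup GPT header to the true end of the disk
sudo sgdisk --move-second-header /dev/vda
```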
Oct 31 00:50:04.065248 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 00:50:04.065448 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 00:50:04.068779 kernel: scsi host0: ahci Oct 31 00:50:04.075394 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Oct 31 00:50:04.075426 kernel: scsi host1: ahci Oct 31 00:50:04.075454 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Oct 31 00:50:04.077393 kernel: scsi host2: ahci Oct 31 00:50:04.078420 kernel: scsi host3: ahci Oct 31 00:50:04.079389 kernel: scsi host4: ahci Oct 31 00:50:04.083666 kernel: scsi host5: ahci Oct 31 00:50:04.083844 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Oct 31 00:50:04.083856 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Oct 31 00:50:04.086319 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Oct 31 00:50:04.086341 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Oct 31 00:50:04.089077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 00:50:04.140324 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Oct 31 00:50:04.140344 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Oct 31 00:50:04.151410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 00:50:04.159646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 00:50:04.163943 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 31 00:50:04.174168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 00:50:04.187546 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 00:50:04.191512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 00:50:04.191572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:50:04.197243 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:50:04.201244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 00:50:04.201308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:50:04.206476 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:50:04.210725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:50:04.289914 disk-uuid[563]: Primary Header is updated. Oct 31 00:50:04.289914 disk-uuid[563]: Secondary Entries is updated. Oct 31 00:50:04.289914 disk-uuid[563]: Secondary Header is updated. Oct 31 00:50:04.314420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:50:04.314442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:50:04.314458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:50:04.304981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:50:04.320500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 31 00:50:04.345055 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:50:04.451445 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 00:50:04.451503 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 00:50:04.453403 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 00:50:04.453425 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 00:50:04.454390 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 00:50:04.457684 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 31 00:50:04.457706 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 31 00:50:04.457716 kernel: ata3.00: applying bridge limits Oct 31 00:50:04.459434 kernel: ata3.00: configured for UDMA/100 Oct 31 00:50:04.462406 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 31 00:50:04.505830 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 31 00:50:04.506106 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 00:50:04.520392 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 00:50:05.378394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:50:05.379118 disk-uuid[567]: The operation has completed successfully. Oct 31 00:50:05.407289 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 00:50:05.407458 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 00:50:05.430602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 31 00:50:05.435038 sh[595]: Success Oct 31 00:50:05.446392 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 31 00:50:05.478394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 31 00:50:05.498581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 31 00:50:05.502006 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 31 00:50:05.514474 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 Oct 31 00:50:05.514503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:50:05.514515 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 31 00:50:05.516160 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 00:50:05.517454 kernel: BTRFS info (device dm-0): using free space tree Oct 31 00:50:05.522817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 31 00:50:05.526292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 00:50:05.536524 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 00:50:05.538755 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 00:50:05.549689 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:50:05.549719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:50:05.549731 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:50:05.553389 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:50:05.563241 systemd[1]: mnt-oem.mount: Deactivated successfully. 
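verity-setup.service assembles /dev/mapper/usr from the USR-A partition named on the kernel command line, using the root hash passed as verity.usrhash. The resulting device-mapper target can be inspected after boot; a sketch, assuming the veritysetup and dmsetup tools are present:

```bash
# Status of the dm-verity mapping backing /usr (root hash, data and hash devices)
sudo veritysetup status usr

# Low-level device-mapper view of the same target
sudo dmsetup status usr
```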
Oct 31 00:50:05.566033 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:50:05.649111 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 00:50:05.666489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 00:50:05.686924 systemd-networkd[773]: lo: Link UP Oct 31 00:50:05.686934 systemd-networkd[773]: lo: Gained carrier Oct 31 00:50:05.688477 systemd-networkd[773]: Enumeration completed Oct 31 00:50:05.688551 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 00:50:05.688857 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:50:05.688861 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:50:05.689843 systemd-networkd[773]: eth0: Link UP Oct 31 00:50:05.689847 systemd-networkd[773]: eth0: Gained carrier Oct 31 00:50:05.689854 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:50:05.691860 systemd[1]: Reached target network.target - Network. Oct 31 00:50:05.711415 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:50:05.786557 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 00:50:05.807649 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 31 00:50:05.865401 ignition[778]: Ignition 2.19.0 Oct 31 00:50:05.865419 ignition[778]: Stage: fetch-offline Oct 31 00:50:05.865468 ignition[778]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:05.865481 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:05.865609 ignition[778]: parsed url from cmdline: "" Oct 31 00:50:05.865614 ignition[778]: no config URL provided Oct 31 00:50:05.865621 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 00:50:05.865635 ignition[778]: no config at "/usr/lib/ignition/user.ign" Oct 31 00:50:05.865671 ignition[778]: op(1): [started] loading QEMU firmware config module Oct 31 00:50:05.865679 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 00:50:05.873552 ignition[778]: op(1): [finished] loading QEMU firmware config module Oct 31 00:50:05.970022 ignition[778]: parsing config with SHA512: e6f7b15b4caa69c0fef9e37886bb15f8bb133b644ceedcf4c52724b0596bad9cf6f53258a596b8e8a57fe32543732c65271b7289eae77184227438c01447a08d Oct 31 00:50:05.974230 unknown[778]: fetched base config from "system" Oct 31 00:50:05.974245 unknown[778]: fetched user config from "qemu" Oct 31 00:50:05.974818 ignition[778]: fetch-offline: fetch-offline passed Oct 31 00:50:05.977680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 00:50:05.974918 ignition[778]: Ignition finished successfully Oct 31 00:50:05.981225 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 00:50:05.989604 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
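systemd-networkd matches eth0 against the shipped zz-default.network and obtains 10.0.0.160/16 over DHCP. networkd applies the first .network file whose [Match] section matches, in lexical order, so a file in /etc/systemd/network that sorts before zz-default.network takes over; a minimal sketch of such a file (interface name and behavior are only examples):

```bash
sudo tee /etc/systemd/network/10-eth0.network >/dev/null <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
sudo systemctl restart systemd-networkd
```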
Oct 31 00:50:06.005253 ignition[790]: Ignition 2.19.0 Oct 31 00:50:06.005268 ignition[790]: Stage: kargs Oct 31 00:50:06.005511 ignition[790]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:06.005528 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:06.010699 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 00:50:06.006680 ignition[790]: kargs: kargs passed Oct 31 00:50:06.006736 ignition[790]: Ignition finished successfully Oct 31 00:50:06.020619 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 00:50:06.034077 ignition[799]: Ignition 2.19.0 Oct 31 00:50:06.034092 ignition[799]: Stage: disks Oct 31 00:50:06.034297 ignition[799]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:06.034313 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:06.035460 ignition[799]: disks: disks passed Oct 31 00:50:06.035511 ignition[799]: Ignition finished successfully Oct 31 00:50:06.042996 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 00:50:06.047533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 00:50:06.051655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 00:50:06.056260 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 00:50:06.060015 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 00:50:06.063818 systemd[1]: Reached target basic.target - Basic System. Oct 31 00:50:06.078543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 00:50:06.092519 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 31 00:50:06.399352 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 00:50:06.415509 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 00:50:06.527424 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none. Oct 31 00:50:06.528490 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 00:50:06.530550 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 00:50:06.541472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 00:50:06.545662 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 00:50:06.551935 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Oct 31 00:50:06.549017 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 00:50:06.588909 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:50:06.588933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:50:06.588945 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:50:06.588955 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:50:06.549084 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 00:50:06.549120 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 00:50:06.555746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 00:50:06.590244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 31 00:50:06.595980 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 00:50:06.635766 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 00:50:06.640053 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Oct 31 00:50:06.643943 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 00:50:06.647693 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 00:50:06.736270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 00:50:06.749488 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 00:50:06.750357 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 00:50:06.764931 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:50:06.761573 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 00:50:06.781887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 00:50:06.797511 ignition[934]: INFO : Ignition 2.19.0 Oct 31 00:50:06.797511 ignition[934]: INFO : Stage: mount Oct 31 00:50:06.801727 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:06.801727 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:06.801727 ignition[934]: INFO : mount: mount passed Oct 31 00:50:06.801727 ignition[934]: INFO : Ignition finished successfully Oct 31 00:50:06.801280 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 00:50:06.811489 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 00:50:07.386597 systemd-networkd[773]: eth0: Gained IPv6LL Oct 31 00:50:07.545560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 00:50:07.555182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Oct 31 00:50:07.555215 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:50:07.555226 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:50:07.557897 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:50:07.561386 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:50:07.562466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
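By this point the initrd has a working network (eth0 has also gained an IPv6 link-local address) and the real root is assembled under /sysroot. The link state that Ignition relied on can be reviewed later with networkd's own client; a short sketch:

```bash
# Per-link state, DHCP lease and addresses as managed by systemd-networkd
networkctl status eth0

# All links and their operational state
networkctl list
```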
Oct 31 00:50:07.581395 ignition[961]: INFO : Ignition 2.19.0 Oct 31 00:50:07.581395 ignition[961]: INFO : Stage: files Oct 31 00:50:07.584181 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:07.584181 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:07.584181 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Oct 31 00:50:07.584181 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 00:50:07.584181 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 00:50:07.594313 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 00:50:07.594313 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 00:50:07.594313 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 00:50:07.594313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:50:07.594313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:50:07.594313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 31 00:50:07.594313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 31 00:50:07.585872 unknown[961]: wrote ssh authorized keys file for user: core Oct 31 00:50:07.630684 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 31 00:50:07.697322 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 31 00:50:07.697322 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 31 00:50:07.703327 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 31 00:50:07.782475 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 31 00:50:07.884730 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 31 00:50:07.884730 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Oct 31 00:50:07.891256 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 00:50:07.894329 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:50:07.897592 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:50:07.900740 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:50:07.904078 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:50:07.907115 ignition[961]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:50:07.909948 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:50:07.912999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 00:50:07.916104 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 00:50:07.919066 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 00:50:07.923399 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 00:50:07.927620 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 00:50:07.930976 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 31 00:50:08.148534 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Oct 31 00:50:08.674921 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 00:50:08.674921 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 31 00:50:08.681089 ignition[961]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" 
Oct 31 00:50:08.728383 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:50:08.734174 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:50:08.736917 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 00:50:08.736917 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Oct 31 00:50:08.736917 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 00:50:08.736917 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:50:08.736917 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:50:08.736917 ignition[961]: INFO : files: files passed Oct 31 00:50:08.736917 ignition[961]: INFO : Ignition finished successfully Oct 31 00:50:08.754541 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 00:50:08.766531 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 00:50:08.770476 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 00:50:08.770878 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 00:50:08.771015 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 00:50:08.789655 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Oct 31 00:50:08.794871 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 00:50:08.794871 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 00:50:08.799904 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 00:50:08.804859 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 00:50:08.807138 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 00:50:08.817516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 00:50:08.843897 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 00:50:08.844072 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 00:50:08.847814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 00:50:08.851306 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 00:50:08.854660 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 00:50:08.868567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 00:50:08.882515 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 00:50:08.888909 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 00:50:08.904780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:50:08.908690 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
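The files stage above adds an SSH key for the user "core", writes several files under /opt and /home/core, installs a containerd drop-in and a prepare-helm.service unit, sets presets, and creates a sysext link for Kubernetes. The provisioning config itself is not recorded in the journal; a minimal Ignition (spec 3.x) sketch that would produce operations of this shape might look as follows (the SSH key, unit bodies, and any value not named in the log are placeholders, not this machine's actual config):

    # Hypothetical Ignition config fragment matching the shape of the logged operations
    cat > ignition-sketch.json <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... placeholder" ] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "containerd.service",
            "dropins": [ { "name": "10-use-cgroupfs.conf", "contents": "[Service]\n# placeholder drop-in contents\n" } ]
          },
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# placeholder unit body\n" },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }
    EOF

On the qemu platform seen in this log, such a config is commonly supplied to the VM through QEMU's firmware config (fw_cfg) mechanism.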
Oct 31 00:50:08.910814 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 00:50:08.914093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 00:50:08.914249 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 00:50:08.917900 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 00:50:08.920783 systemd[1]: Stopped target basic.target - Basic System. Oct 31 00:50:08.924267 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 00:50:08.927650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 00:50:08.930994 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 00:50:08.934566 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 00:50:08.938102 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 00:50:08.941887 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 00:50:08.945268 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 00:50:08.948847 systemd[1]: Stopped target swap.target - Swaps. Oct 31 00:50:08.951811 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 00:50:08.951990 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 00:50:08.955819 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:50:08.958277 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:50:08.961727 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 00:50:08.961870 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:50:08.965423 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 00:50:08.965537 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 00:50:08.969218 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 00:50:08.969328 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 00:50:08.972726 systemd[1]: Stopped target paths.target - Path Units. Oct 31 00:50:08.975682 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 00:50:08.979417 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:50:08.981541 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 00:50:08.984704 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 00:50:08.987914 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 00:50:08.988025 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 00:50:08.991403 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 00:50:08.991489 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 00:50:08.994461 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 00:50:08.994574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 00:50:08.998188 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 00:50:08.998293 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 00:50:09.015528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 00:50:09.018916 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 31 00:50:09.021134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 00:50:09.029828 ignition[1015]: INFO : Ignition 2.19.0 Oct 31 00:50:09.029828 ignition[1015]: INFO : Stage: umount Oct 31 00:50:09.029828 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:50:09.021332 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 00:50:09.040036 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:50:09.040036 ignition[1015]: INFO : umount: umount passed Oct 31 00:50:09.040036 ignition[1015]: INFO : Ignition finished successfully Oct 31 00:50:09.024785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 00:50:09.024890 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 00:50:09.031644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 00:50:09.031753 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 00:50:09.034989 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 00:50:09.035092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 00:50:09.040094 systemd[1]: Stopped target network.target - Network. Oct 31 00:50:09.043460 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 00:50:09.043514 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 00:50:09.046438 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 00:50:09.046486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 00:50:09.049805 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 00:50:09.049853 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 00:50:09.053001 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 00:50:09.053054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 00:50:09.056700 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 00:50:09.059975 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 00:50:09.064543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 00:50:09.065418 systemd-networkd[773]: eth0: DHCPv6 lease lost Oct 31 00:50:09.067905 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 00:50:09.068043 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 00:50:09.070876 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 00:50:09.071002 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 00:50:09.075808 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 00:50:09.075866 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:50:09.092468 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 00:50:09.094121 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 00:50:09.094181 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 00:50:09.096292 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 00:50:09.096342 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:50:09.099319 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 00:50:09.099381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 31 00:50:09.102692 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 00:50:09.102742 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 00:50:09.104900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 00:50:09.121161 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 00:50:09.121291 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 00:50:09.128119 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 00:50:09.128306 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 00:50:09.131685 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 00:50:09.131736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 00:50:09.134910 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 00:50:09.134962 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:50:09.138688 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 00:50:09.138740 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 00:50:09.142613 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 00:50:09.142662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 00:50:09.145857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 00:50:09.145907 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:50:09.159505 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 00:50:09.161840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 00:50:09.161897 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:50:09.165489 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 31 00:50:09.165541 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:50:09.169258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 00:50:09.169307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:50:09.171358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 00:50:09.171422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:50:09.175097 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 00:50:09.175203 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 00:50:09.271682 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 00:50:09.271822 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 00:50:09.275208 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 00:50:09.278326 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 00:50:09.278397 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 00:50:09.290507 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 00:50:09.298456 systemd[1]: Switching root. Oct 31 00:50:09.329766 systemd-journald[193]: Journal stopped Oct 31 00:50:11.756178 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Oct 31 00:50:11.756261 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 00:50:11.756274 kernel: SELinux: policy capability open_perms=1 Oct 31 00:50:11.756285 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 00:50:11.756301 kernel: SELinux: policy capability always_check_network=0 Oct 31 00:50:11.756312 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 00:50:11.756324 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 00:50:11.756338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 00:50:11.756349 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 00:50:11.756361 kernel: audit: type=1403 audit(1761871810.745:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 00:50:11.758926 systemd[1]: Successfully loaded SELinux policy in 41.725ms. Oct 31 00:50:11.759038 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.394ms. Oct 31 00:50:11.759055 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 00:50:11.759068 systemd[1]: Detected virtualization kvm. Oct 31 00:50:11.759080 systemd[1]: Detected architecture x86-64. Oct 31 00:50:11.759093 systemd[1]: Detected first boot. Oct 31 00:50:11.759109 systemd[1]: Initializing machine ID from VM UUID. Oct 31 00:50:11.759121 zram_generator::config[1080]: No configuration found. Oct 31 00:50:11.759136 systemd[1]: Populated /etc with preset unit settings. Oct 31 00:50:11.759148 systemd[1]: Queued start job for default target multi-user.target. Oct 31 00:50:11.759164 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 31 00:50:11.759183 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 00:50:11.759199 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 00:50:11.759213 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 00:50:11.759229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 00:50:11.759241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 00:50:11.759253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 31 00:50:11.759266 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 00:50:11.759278 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 00:50:11.759294 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:50:11.759307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:50:11.759320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 00:50:11.759332 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 00:50:11.759347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 00:50:11.759360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 00:50:11.759389 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
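After the pivot, systemd 255 relabels /dev, /dev/shm, /run and /sys/fs/cgroup, loads the SELinux policy, detects KVM, and treats this as a first boot, deriving the machine ID from the VM UUID. The same facts can be confirmed later from a shell, for example (illustrative):

    # Virtualization type as seen by systemd (expected: kvm)
    systemd-detect-virt

    # Machine ID that was initialized on first boot
    cat /etc/machine-id

    # Find the policy-load message again in the current boot's journal
    journalctl -b -g 'Successfully loaded SELinux policy' --no-pager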
Oct 31 00:50:11.759401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:50:11.759413 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 00:50:11.759427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 00:50:11.759441 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 00:50:11.759453 systemd[1]: Reached target slices.target - Slice Units. Oct 31 00:50:11.759468 systemd[1]: Reached target swap.target - Swaps. Oct 31 00:50:11.759487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 00:50:11.759499 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 00:50:11.759511 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 00:50:11.759523 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 00:50:11.759535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:50:11.759547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 00:50:11.759558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:50:11.759570 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 00:50:11.759584 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 00:50:11.759596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 00:50:11.759609 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 00:50:11.759620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:11.759632 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 00:50:11.759644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 00:50:11.759657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 31 00:50:11.759669 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 00:50:11.759680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 00:50:11.759695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 00:50:11.759708 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 00:50:11.759720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 00:50:11.759732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 00:50:11.759744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 00:50:11.759756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 00:50:11.759767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 00:50:11.759780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 00:50:11.759795 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 31 00:50:11.759807 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Oct 31 00:50:11.759820 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 00:50:11.759868 systemd-journald[1152]: Collecting audit messages is disabled. Oct 31 00:50:11.759901 kernel: fuse: init (API version 7.39) Oct 31 00:50:11.759915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 00:50:11.759927 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 00:50:11.759943 systemd-journald[1152]: Journal started Oct 31 00:50:11.759965 systemd-journald[1152]: Runtime Journal (/run/log/journal/34db216594684526a4f8d864fc28f1f5) is 6.0M, max 48.4M, 42.3M free. Oct 31 00:50:11.763384 kernel: loop: module loaded Oct 31 00:50:11.776007 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 00:50:11.783156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 00:50:11.787390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:11.792998 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 00:50:11.797602 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 00:50:11.799929 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 00:50:11.802811 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 00:50:11.805025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 00:50:11.807506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 00:50:11.810484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 00:50:11.812895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:50:11.819040 kernel: ACPI: bus type drm_connector registered Oct 31 00:50:11.816016 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 00:50:11.820239 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 00:50:11.820572 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 00:50:11.823124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:50:11.823349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 00:50:11.826363 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 00:50:11.826602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 00:50:11.828952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:50:11.829211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 00:50:11.831962 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 00:50:11.832226 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 00:50:11.834774 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:50:11.835054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 00:50:11.837669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 00:50:11.840477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 00:50:11.843712 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Oct 31 00:50:11.861064 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 00:50:11.875530 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 00:50:11.881448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 00:50:11.883415 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 00:50:11.885478 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 00:50:11.889202 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 00:50:11.891631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 00:50:11.892924 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 00:50:11.895217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 00:50:11.897914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:50:11.898980 systemd-journald[1152]: Time spent on flushing to /var/log/journal/34db216594684526a4f8d864fc28f1f5 is 13.100ms for 943 entries. Oct 31 00:50:11.898980 systemd-journald[1152]: System Journal (/var/log/journal/34db216594684526a4f8d864fc28f1f5) is 8.0M, max 195.6M, 187.6M free. Oct 31 00:50:12.073177 systemd-journald[1152]: Received client request to flush runtime journal. Oct 31 00:50:11.903185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 00:50:11.909844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 00:50:11.912556 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 00:50:11.914838 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 00:50:11.918315 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 00:50:11.927106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 00:50:11.938561 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 31 00:50:11.961570 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 31 00:50:11.996847 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Oct 31 00:50:11.996862 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Oct 31 00:50:12.002296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:50:12.004733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:50:12.018547 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 00:50:12.050869 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 00:50:12.075686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 00:50:12.078798 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 00:50:12.097766 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Oct 31 00:50:12.097790 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
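At this point the runtime journal (/run/log/journal, 6.0M) is flushed into the persistent system journal (/var/log/journal, 8.0M). The equivalent manual flush and a quick usage check would be roughly (illustrative):

    # Ask journald to flush /run/log/journal into /var/log/journal
    sudo journalctl --flush

    # Show how much disk space the active and archived journals use
    journalctl --disk-usage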
Oct 31 00:50:12.104347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:50:12.939256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 00:50:12.951576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 00:50:12.975922 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Oct 31 00:50:12.992950 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 00:50:13.025575 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 00:50:13.028184 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Oct 31 00:50:13.038394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1261) Oct 31 00:50:13.055571 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 00:50:13.078394 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 31 00:50:13.085397 kernel: ACPI: button: Power Button [PWRF] Oct 31 00:50:13.096441 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 31 00:50:13.100815 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 31 00:50:13.115318 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 31 00:50:13.118699 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 31 00:50:13.128412 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 31 00:50:13.150275 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 00:50:13.152556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:50:13.158773 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 00:50:13.192210 systemd-networkd[1263]: lo: Link UP Oct 31 00:50:13.192220 systemd-networkd[1263]: lo: Gained carrier Oct 31 00:50:13.194246 systemd-networkd[1263]: Enumeration completed Oct 31 00:50:13.194347 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 00:50:13.196086 systemd-networkd[1263]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:50:13.196090 systemd-networkd[1263]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:50:13.196760 systemd-networkd[1263]: eth0: Link UP Oct 31 00:50:13.196764 systemd-networkd[1263]: eth0: Gained carrier Oct 31 00:50:13.196776 systemd-networkd[1263]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:50:13.264568 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 00:50:13.266414 systemd-networkd[1263]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:50:13.278937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
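systemd-networkd matched eth0 against the stock /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.160/16 via DHCPv4 from 10.0.0.1. The resulting state can be inspected, or overridden with a higher-priority .network file, roughly as follows (the 50-static.network name is a placeholder; the addresses are the ones in the log):

    # Show link state, DHCP lease and DNS as configured by networkd
    networkctl status eth0

    # Example override that would take precedence over zz-default.network
    sudo tee /etc/systemd/network/50-static.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    Address=10.0.0.160/16
    Gateway=10.0.0.1
    EOF
    sudo networkctl reload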
Oct 31 00:50:13.281246 kernel: kvm_amd: TSC scaling supported Oct 31 00:50:13.281274 kernel: kvm_amd: Nested Virtualization enabled Oct 31 00:50:13.281301 kernel: kvm_amd: Nested Paging enabled Oct 31 00:50:13.282292 kernel: kvm_amd: LBR virtualization supported Oct 31 00:50:13.283427 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 31 00:50:13.284612 kernel: kvm_amd: Virtual GIF supported Oct 31 00:50:13.304400 kernel: EDAC MC: Ver: 3.0.0 Oct 31 00:50:13.342647 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 31 00:50:13.357516 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 31 00:50:13.365610 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:50:13.398798 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 31 00:50:13.401245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:50:13.412497 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 31 00:50:13.419196 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:50:13.452731 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 31 00:50:13.454949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 00:50:13.456965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 00:50:13.456994 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 00:50:13.502459 systemd[1]: Reached target machines.target - Containers. Oct 31 00:50:13.505149 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 31 00:50:13.519497 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 00:50:13.522699 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 00:50:13.524528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 00:50:13.525450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 00:50:13.529647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 31 00:50:13.534615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 31 00:50:13.537640 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 00:50:13.604396 kernel: loop0: detected capacity change from 0 to 142488 Oct 31 00:50:13.605419 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 31 00:50:13.626429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 00:50:13.670402 kernel: loop1: detected capacity change from 0 to 224512 Oct 31 00:50:13.715398 kernel: loop2: detected capacity change from 0 to 140768 Oct 31 00:50:13.926390 kernel: loop3: detected capacity change from 0 to 142488 Oct 31 00:50:13.937582 kernel: loop4: detected capacity change from 0 to 224512 Oct 31 00:50:13.946402 kernel: loop5: detected capacity change from 0 to 140768 Oct 31 00:50:13.953929 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 31 00:50:13.954690 (sd-merge)[1312]: Merged extensions into '/usr'. Oct 31 00:50:13.958744 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 00:50:13.959117 systemd[1]: Reloading... Oct 31 00:50:14.007594 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 00:50:14.023197 zram_generator::config[1344]: No configuration found. Oct 31 00:50:14.223621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:50:14.291608 systemd[1]: Reloading finished in 331 ms. Oct 31 00:50:14.314481 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 00:50:14.315975 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 00:50:14.318427 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 31 00:50:14.321066 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 00:50:14.339683 systemd[1]: Starting ensure-sysext.service... Oct 31 00:50:14.342688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 00:50:14.350081 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Oct 31 00:50:14.350097 systemd[1]: Reloading... Oct 31 00:50:14.372035 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 00:50:14.372432 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 00:50:14.373441 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 00:50:14.373767 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Oct 31 00:50:14.373862 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Oct 31 00:50:14.377574 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 00:50:14.377586 systemd-tmpfiles[1389]: Skipping /boot Oct 31 00:50:14.390958 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 00:50:14.391068 systemd-tmpfiles[1389]: Skipping /boot Oct 31 00:50:14.416404 zram_generator::config[1421]: No configuration found. Oct 31 00:50:14.534419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:50:14.602624 systemd[1]: Reloading finished in 252 ms. Oct 31 00:50:14.620362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
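systemd-sysext merged the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr, which is what makes the Kubernetes sysext linked during the files stage visible to the running system. The merge state can be inspected and refreshed with (illustrative):

    # List merged extensions and the hierarchies they overlay
    systemd-sysext status

    # Extension images picked up from /etc/extensions (and /var/lib/extensions)
    ls -l /etc/extensions

    # Re-merge after adding or removing an extension image
    sudo systemd-sysext refresh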
Oct 31 00:50:14.640150 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 00:50:14.643483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 00:50:14.646668 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 00:50:14.652496 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 00:50:14.656036 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 00:50:14.661656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.661836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 00:50:14.663195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 00:50:14.666559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 00:50:14.674077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 00:50:14.677515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 00:50:14.677653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.679658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:50:14.679926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 00:50:14.682841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:50:14.683058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 00:50:14.686694 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:50:14.688844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 00:50:14.692659 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 00:50:14.704630 augenrules[1495]: No rules Oct 31 00:50:14.707000 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 31 00:50:14.715277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.715517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 00:50:14.718030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 00:50:14.749700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 00:50:14.754016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 00:50:14.755855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 00:50:14.758292 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 00:50:14.760008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.762363 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 00:50:14.765399 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Oct 31 00:50:14.768118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:50:14.768336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 00:50:14.770880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:50:14.771099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 00:50:14.773657 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:50:14.773929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 00:50:14.776851 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 00:50:14.786995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.787182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 00:50:14.797610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 00:50:14.800668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 00:50:14.803280 systemd-resolved[1467]: Positive Trust Anchors: Oct 31 00:50:14.803300 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:50:14.803332 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 00:50:14.803822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 00:50:14.809929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 00:50:14.810278 systemd-resolved[1467]: Defaulting to hostname 'linux'. Oct 31 00:50:14.810562 systemd-networkd[1263]: eth0: Gained IPv6LL Oct 31 00:50:14.811969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 00:50:14.812033 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:50:14.812057 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 00:50:14.812896 systemd[1]: Finished ensure-sysext.service. Oct 31 00:50:14.814979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:50:14.815196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 00:50:14.817307 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 00:50:14.819836 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 00:50:14.822689 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 00:50:14.822912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
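systemd-resolved came up with the default DNSSEC trust anchor and negative trust anchors listed above and, lacking other information, defaulted the hostname to 'linux'. Its runtime view can be checked with (illustrative):

    # Global and per-link DNS configuration, including DNSSEC state
    resolvectl status

    # Resolve a name through the local stub to confirm resolved is working
    resolvectl query flatcar.org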
Oct 31 00:50:14.825088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:50:14.825298 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 00:50:14.827746 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:50:14.827989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 00:50:14.836629 systemd[1]: Reached target network.target - Network. Oct 31 00:50:14.838228 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 00:50:14.840148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:50:14.842342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 00:50:14.842429 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 00:50:14.853595 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 00:50:14.914712 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 00:50:14.932131 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 00:50:14.932759 systemd-timesyncd[1537]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 00:50:14.932816 systemd-timesyncd[1537]: Initial clock synchronization to Fri 2025-10-31 00:50:15.101906 UTC. Oct 31 00:50:14.934144 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 00:50:14.936228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 00:50:14.938315 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 00:50:14.940439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 00:50:14.940467 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:50:14.941984 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 00:50:14.943850 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 00:50:14.945733 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 00:50:14.947864 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:50:14.950422 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 00:50:14.954317 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 00:50:14.957477 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 00:50:14.963739 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 00:50:14.965733 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 00:50:14.967531 systemd[1]: Reached target basic.target - Basic System. Oct 31 00:50:14.969332 systemd[1]: System is tainted: cgroupsv1 Oct 31 00:50:14.969383 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 00:50:14.969408 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 00:50:14.970649 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 00:50:14.973575 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
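systemd-timesyncd contacted 10.0.0.1:123 and stepped the clock to 00:50:15 UTC, after which time-set.target and the timer units become available. Synchronization status can be reviewed with (illustrative):

    # NTP server, poll interval and offset as seen by timesyncd
    timedatectl timesync-status

    # Overall clock / NTP state
    timedatectl status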
Oct 31 00:50:14.976498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 00:50:14.981284 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 00:50:14.984578 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 00:50:14.986344 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 00:50:14.988684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:14.994563 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 00:50:14.997627 jq[1545]: false Oct 31 00:50:14.998344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 00:50:15.002475 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 00:50:15.007169 dbus-daemon[1543]: [system] SELinux support is enabled Oct 31 00:50:15.007965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 00:50:15.012639 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 00:50:15.015492 extend-filesystems[1547]: Found loop3 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found loop4 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found loop5 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found sr0 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda1 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda2 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda3 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found usr Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda4 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda6 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda7 Oct 31 00:50:15.015492 extend-filesystems[1547]: Found vda9 Oct 31 00:50:15.015492 extend-filesystems[1547]: Checking size of /dev/vda9 Oct 31 00:50:15.019428 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 00:50:15.021627 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 00:50:15.024597 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 00:50:15.029536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 00:50:15.050245 update_engine[1565]: I20251031 00:50:15.049983 1565 main.cc:92] Flatcar Update Engine starting Oct 31 00:50:15.035115 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 00:50:15.056728 update_engine[1565]: I20251031 00:50:15.051265 1565 update_check_scheduler.cc:74] Next update check in 7m6s Oct 31 00:50:15.060885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 00:50:15.061260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 00:50:15.062084 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 00:50:15.062464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 00:50:15.064972 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 31 00:50:15.067502 jq[1571]: true Oct 31 00:50:15.070296 extend-filesystems[1547]: Resized partition /dev/vda9 Oct 31 00:50:15.078411 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Oct 31 00:50:15.084219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 00:50:15.084752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 00:50:15.093409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1253) Oct 31 00:50:15.099753 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 31 00:50:15.124436 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 31 00:50:15.128500 jq[1588]: true Oct 31 00:50:15.130142 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Oct 31 00:50:15.130168 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 00:50:15.134676 systemd-logind[1562]: New seat seat0. Oct 31 00:50:15.137259 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 00:50:15.150765 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 00:50:15.151170 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 31 00:50:15.156904 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 31 00:50:15.161831 tar[1587]: linux-amd64/LICENSE Oct 31 00:50:15.162213 tar[1587]: linux-amd64/helm Oct 31 00:50:15.172031 systemd[1]: Started update-engine.service - Update Engine. Oct 31 00:50:15.174843 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 00:50:15.175026 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 00:50:15.175165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 00:50:15.177508 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 00:50:15.177624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 00:50:15.180468 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 00:50:15.191676 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 00:50:15.201417 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 00:50:15.227302 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 00:50:15.238855 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 00:50:15.252682 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 00:50:15.253010 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 00:50:15.263572 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 00:50:15.265672 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 00:50:15.325840 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 31 00:50:15.342901 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 00:50:15.355802 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 00:50:15.358162 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 00:50:15.369408 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 31 00:50:15.408160 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 00:50:15.408160 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 00:50:15.408160 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 31 00:50:15.417454 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Oct 31 00:50:15.410802 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 00:50:15.411150 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 00:50:15.509361 bash[1624]: Updated "/home/core/.ssh/authorized_keys" Oct 31 00:50:15.512887 containerd[1589]: time="2025-10-31T00:50:15.510908168Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 31 00:50:15.512779 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 00:50:15.522905 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 00:50:15.544221 containerd[1589]: time="2025-10-31T00:50:15.544167113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546280105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546330174Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546349786Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546574248Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546593583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546665833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546678447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546973911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.546990965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.547007068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547409 containerd[1589]: time="2025-10-31T00:50:15.547018609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547652 containerd[1589]: time="2025-10-31T00:50:15.547124579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547706 containerd[1589]: time="2025-10-31T00:50:15.547689314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:15.547960 containerd[1589]: time="2025-10-31T00:50:15.547940560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:15.548016 containerd[1589]: time="2025-10-31T00:50:15.548003816Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 00:50:15.548186 containerd[1589]: time="2025-10-31T00:50:15.548170066Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 00:50:15.548304 containerd[1589]: time="2025-10-31T00:50:15.548287750Z" level=info msg="metadata content store policy set" policy=shared Oct 31 00:50:15.562480 containerd[1589]: time="2025-10-31T00:50:15.562383580Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 00:50:15.562603 containerd[1589]: time="2025-10-31T00:50:15.562519096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 00:50:15.562603 containerd[1589]: time="2025-10-31T00:50:15.562538882Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 31 00:50:15.562603 containerd[1589]: time="2025-10-31T00:50:15.562554719Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 31 00:50:15.562603 containerd[1589]: time="2025-10-31T00:50:15.562568040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 00:50:15.562832 containerd[1589]: time="2025-10-31T00:50:15.562794630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 00:50:15.564826 containerd[1589]: time="2025-10-31T00:50:15.564786028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565257921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565292663Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565312163Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565333096Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565351624Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565370325Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565407238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565420 containerd[1589]: time="2025-10-31T00:50:15.565429663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565448446Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565469695Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565497779Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565526834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565546069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565564207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565582643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565599554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565617898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565634983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565651966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565669032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565688756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.565704 containerd[1589]: time="2025-10-31T00:50:15.565704061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565720932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565738467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565765282Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565797212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565813316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565828937Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565897074Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565921720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565936267Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565953936Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565968055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.565991145Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.566005131Z" level=info msg="NRI interface is disabled by configuration." Oct 31 00:50:15.566020 containerd[1589]: time="2025-10-31T00:50:15.566019311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 31 00:50:15.566699 containerd[1589]: time="2025-10-31T00:50:15.566411914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 00:50:15.566699 containerd[1589]: time="2025-10-31T00:50:15.566484133Z" level=info msg="Connect containerd service" Oct 31 00:50:15.566699 containerd[1589]: time="2025-10-31T00:50:15.566554356Z" level=info msg="using legacy CRI server" Oct 31 00:50:15.566699 containerd[1589]: time="2025-10-31T00:50:15.566564260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 00:50:15.566699 containerd[1589]: time="2025-10-31T00:50:15.566672992Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 00:50:15.567334 containerd[1589]: time="2025-10-31T00:50:15.567304605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 
00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567437800Z" level=info msg="Start subscribing containerd event" Oct 31 00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567506069Z" level=info msg="Start recovering state" Oct 31 00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567598677Z" level=info msg="Start event monitor" Oct 31 00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567621932Z" level=info msg="Start snapshots syncer" Oct 31 00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567633626Z" level=info msg="Start cni network conf syncer for default" Oct 31 00:50:15.567786 containerd[1589]: time="2025-10-31T00:50:15.567643018Z" level=info msg="Start streaming server" Oct 31 00:50:15.567911 containerd[1589]: time="2025-10-31T00:50:15.567823222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 00:50:15.567911 containerd[1589]: time="2025-10-31T00:50:15.567887286Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 00:50:15.567954 containerd[1589]: time="2025-10-31T00:50:15.567944169Z" level=info msg="containerd successfully booted in 0.058337s" Oct 31 00:50:15.568273 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 00:50:15.657148 tar[1587]: linux-amd64/README.md Oct 31 00:50:15.670290 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 00:50:16.096835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 00:50:16.110725 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:55594.service - OpenSSH per-connection server daemon (10.0.0.1:55594). Oct 31 00:50:16.147468 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 55594 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.149481 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.159105 systemd-logind[1562]: New session 1 of user core. Oct 31 00:50:16.160221 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 00:50:16.171595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 00:50:16.185733 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 00:50:16.201073 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 00:50:16.203420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:16.206578 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 00:50:16.209297 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:50:16.215476 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:50:16.340183 systemd[1681]: Queued start job for default target default.target. Oct 31 00:50:16.340607 systemd[1681]: Created slice app.slice - User Application Slice. Oct 31 00:50:16.340625 systemd[1681]: Reached target paths.target - Paths. Oct 31 00:50:16.340638 systemd[1681]: Reached target timers.target - Timers. Oct 31 00:50:16.354516 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 00:50:16.362211 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 00:50:16.362306 systemd[1681]: Reached target sockets.target - Sockets. Oct 31 00:50:16.362326 systemd[1681]: Reached target basic.target - Basic System. 
Oct 31 00:50:16.362382 systemd[1681]: Reached target default.target - Main User Target. Oct 31 00:50:16.362446 systemd[1681]: Startup finished in 139ms. Oct 31 00:50:16.362676 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 00:50:16.366318 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 00:50:16.370475 systemd[1]: Startup finished in 8.878s (kernel) + 5.665s (userspace) = 14.544s. Oct 31 00:50:16.428767 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:55602.service - OpenSSH per-connection server daemon (10.0.0.1:55602). Oct 31 00:50:16.458369 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 55602 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.460308 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.465182 systemd-logind[1562]: New session 2 of user core. Oct 31 00:50:16.471722 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 00:50:16.530168 sshd[1704]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:16.540618 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:55612.service - OpenSSH per-connection server daemon (10.0.0.1:55612). Oct 31 00:50:16.541165 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:55602.service: Deactivated successfully. Oct 31 00:50:16.543138 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 00:50:16.544928 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Oct 31 00:50:16.546776 systemd-logind[1562]: Removed session 2. Oct 31 00:50:16.568372 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 55612 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.570056 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.574358 systemd-logind[1562]: New session 3 of user core. Oct 31 00:50:16.588694 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 00:50:16.641269 sshd[1711]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:16.647826 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622). Oct 31 00:50:16.648464 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:55612.service: Deactivated successfully. Oct 31 00:50:16.652363 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Oct 31 00:50:16.653905 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 00:50:16.656404 systemd-logind[1562]: Removed session 3. Oct 31 00:50:16.660462 kubelet[1682]: E1031 00:50:16.660417 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:50:16.664479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:50:16.664840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:50:16.677104 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.678596 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.683081 systemd-logind[1562]: New session 4 of user core. 
Oct 31 00:50:16.699634 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 00:50:16.755926 sshd[1719]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:16.768687 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:55630.service - OpenSSH per-connection server daemon (10.0.0.1:55630). Oct 31 00:50:16.769277 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:55622.service: Deactivated successfully. Oct 31 00:50:16.771968 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Oct 31 00:50:16.773193 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 00:50:16.774433 systemd-logind[1562]: Removed session 4. Oct 31 00:50:16.795147 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 55630 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.796566 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.800503 systemd-logind[1562]: New session 5 of user core. Oct 31 00:50:16.809630 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 00:50:16.867128 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 00:50:16.867564 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:50:16.889879 sudo[1736]: pam_unix(sudo:session): session closed for user root Oct 31 00:50:16.891851 sshd[1729]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:16.913645 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:55638.service - OpenSSH per-connection server daemon (10.0.0.1:55638). Oct 31 00:50:16.914239 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:55630.service: Deactivated successfully. Oct 31 00:50:16.916708 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Oct 31 00:50:16.916784 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 00:50:16.918756 systemd-logind[1562]: Removed session 5. Oct 31 00:50:16.941596 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 55638 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:16.943148 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:16.947044 systemd-logind[1562]: New session 6 of user core. Oct 31 00:50:16.956634 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 00:50:17.011437 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 00:50:17.011782 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:50:17.015449 sudo[1746]: pam_unix(sudo:session): session closed for user root Oct 31 00:50:17.021748 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 00:50:17.022155 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:50:17.045605 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 31 00:50:17.047275 auditctl[1749]: No rules Oct 31 00:50:17.048606 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 00:50:17.048963 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 31 00:50:17.051071 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 00:50:17.081809 augenrules[1768]: No rules Oct 31 00:50:17.082858 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Oct 31 00:50:17.084166 sudo[1745]: pam_unix(sudo:session): session closed for user root Oct 31 00:50:17.086323 sshd[1738]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:17.101659 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:55644.service - OpenSSH per-connection server daemon (10.0.0.1:55644). Oct 31 00:50:17.102133 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:55638.service: Deactivated successfully. Oct 31 00:50:17.104548 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Oct 31 00:50:17.105185 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 00:50:17.106236 systemd-logind[1562]: Removed session 6. Oct 31 00:50:17.131649 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 55644 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:50:17.133369 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:50:17.137433 systemd-logind[1562]: New session 7 of user core. Oct 31 00:50:17.148643 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 00:50:17.202682 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 00:50:17.203053 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:50:17.483591 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 00:50:17.483818 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 00:50:17.751453 dockerd[1799]: time="2025-10-31T00:50:17.751295680Z" level=info msg="Starting up" Oct 31 00:50:19.342328 dockerd[1799]: time="2025-10-31T00:50:19.342263434Z" level=info msg="Loading containers: start." Oct 31 00:50:19.455407 kernel: Initializing XFRM netlink socket Oct 31 00:50:19.537952 systemd-networkd[1263]: docker0: Link UP Oct 31 00:50:19.758280 dockerd[1799]: time="2025-10-31T00:50:19.758137545Z" level=info msg="Loading containers: done." Oct 31 00:50:19.773185 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1682981938-merged.mount: Deactivated successfully. Oct 31 00:50:19.919546 dockerd[1799]: time="2025-10-31T00:50:19.919476134Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 00:50:19.919685 dockerd[1799]: time="2025-10-31T00:50:19.919624141Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 31 00:50:19.920051 dockerd[1799]: time="2025-10-31T00:50:19.920014039Z" level=info msg="Daemon has completed initialization" Oct 31 00:50:20.033515 dockerd[1799]: time="2025-10-31T00:50:20.032810541Z" level=info msg="API listen on /run/docker.sock" Oct 31 00:50:20.033084 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 00:50:20.744937 containerd[1589]: time="2025-10-31T00:50:20.744899016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 00:50:22.196209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3315648611.mount: Deactivated successfully. 
Oct 31 00:50:24.332243 containerd[1589]: time="2025-10-31T00:50:24.332181199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:24.332965 containerd[1589]: time="2025-10-31T00:50:24.332891895Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 31 00:50:24.333931 containerd[1589]: time="2025-10-31T00:50:24.333894065Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:24.336420 containerd[1589]: time="2025-10-31T00:50:24.336367858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:24.337430 containerd[1589]: time="2025-10-31T00:50:24.337397696Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.592460938s" Oct 31 00:50:24.337472 containerd[1589]: time="2025-10-31T00:50:24.337437229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 31 00:50:24.337945 containerd[1589]: time="2025-10-31T00:50:24.337918861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 00:50:25.475687 containerd[1589]: time="2025-10-31T00:50:25.475593290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:25.476363 containerd[1589]: time="2025-10-31T00:50:25.476264668Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 31 00:50:25.477634 containerd[1589]: time="2025-10-31T00:50:25.477590337Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:25.480861 containerd[1589]: time="2025-10-31T00:50:25.480810442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:25.482172 containerd[1589]: time="2025-10-31T00:50:25.482068279Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.144038457s" Oct 31 00:50:25.482172 containerd[1589]: time="2025-10-31T00:50:25.482161891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 31 00:50:25.482747 
containerd[1589]: time="2025-10-31T00:50:25.482654053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 00:50:26.661685 containerd[1589]: time="2025-10-31T00:50:26.661622828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:26.662638 containerd[1589]: time="2025-10-31T00:50:26.662563964Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 31 00:50:26.663763 containerd[1589]: time="2025-10-31T00:50:26.663727907Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:26.667196 containerd[1589]: time="2025-10-31T00:50:26.667171601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:26.668393 containerd[1589]: time="2025-10-31T00:50:26.668334707Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.185647246s" Oct 31 00:50:26.668393 containerd[1589]: time="2025-10-31T00:50:26.668389002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 31 00:50:26.668938 containerd[1589]: time="2025-10-31T00:50:26.668911590Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 00:50:26.914983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 00:50:26.924538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:27.119495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:27.131758 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:50:27.294260 kubelet[2027]: E1031 00:50:27.294030 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:50:27.301274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:50:27.301601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:50:29.139856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151448759.mount: Deactivated successfully. 
Oct 31 00:50:29.871201 containerd[1589]: time="2025-10-31T00:50:29.871114132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:29.872000 containerd[1589]: time="2025-10-31T00:50:29.871924330Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 31 00:50:29.873317 containerd[1589]: time="2025-10-31T00:50:29.873273710Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:29.875872 containerd[1589]: time="2025-10-31T00:50:29.875806450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:29.876308 containerd[1589]: time="2025-10-31T00:50:29.876272496Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.207327284s" Oct 31 00:50:29.876357 containerd[1589]: time="2025-10-31T00:50:29.876305475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 31 00:50:29.876881 containerd[1589]: time="2025-10-31T00:50:29.876840605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 00:50:30.487357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601286282.mount: Deactivated successfully. 
Oct 31 00:50:32.261620 containerd[1589]: time="2025-10-31T00:50:32.261545810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:32.262402 containerd[1589]: time="2025-10-31T00:50:32.262324243Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 31 00:50:32.263791 containerd[1589]: time="2025-10-31T00:50:32.263738354Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:32.267344 containerd[1589]: time="2025-10-31T00:50:32.267287584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:32.268392 containerd[1589]: time="2025-10-31T00:50:32.268342016Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.391466009s" Oct 31 00:50:32.268453 containerd[1589]: time="2025-10-31T00:50:32.268393876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 31 00:50:32.268960 containerd[1589]: time="2025-10-31T00:50:32.268897236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 00:50:33.092287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915783383.mount: Deactivated successfully. 
Oct 31 00:50:33.096069 containerd[1589]: time="2025-10-31T00:50:33.096028869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:33.096764 containerd[1589]: time="2025-10-31T00:50:33.096711179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 00:50:33.097730 containerd[1589]: time="2025-10-31T00:50:33.097695279Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:33.099805 containerd[1589]: time="2025-10-31T00:50:33.099762473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:33.100458 containerd[1589]: time="2025-10-31T00:50:33.100429655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 831.476133ms" Oct 31 00:50:33.100458 containerd[1589]: time="2025-10-31T00:50:33.100460322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 00:50:33.100949 containerd[1589]: time="2025-10-31T00:50:33.100914709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 00:50:33.844821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020455138.mount: Deactivated successfully. Oct 31 00:50:35.472942 containerd[1589]: time="2025-10-31T00:50:35.472864122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:35.473787 containerd[1589]: time="2025-10-31T00:50:35.473705378Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 31 00:50:35.475036 containerd[1589]: time="2025-10-31T00:50:35.475003477Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:35.478453 containerd[1589]: time="2025-10-31T00:50:35.478421865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:35.479669 containerd[1589]: time="2025-10-31T00:50:35.479628549Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.378678369s" Oct 31 00:50:35.479714 containerd[1589]: time="2025-10-31T00:50:35.479665722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 31 00:50:37.551777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 31 00:50:37.561519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:37.734705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:37.739067 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:50:37.777461 kubelet[2188]: E1031 00:50:37.777403 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:50:37.781890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:50:37.782164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:50:38.494790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:38.511592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:38.536587 systemd[1]: Reloading requested from client PID 2205 ('systemctl') (unit session-7.scope)... Oct 31 00:50:38.536604 systemd[1]: Reloading... Oct 31 00:50:38.624441 zram_generator::config[2245]: No configuration found. Oct 31 00:50:39.759555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:50:39.837740 systemd[1]: Reloading finished in 1300 ms. Oct 31 00:50:39.881289 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 00:50:39.881413 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 00:50:39.881767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:39.892753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:40.051428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:40.061785 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:50:40.095928 kubelet[2304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:50:40.095928 kubelet[2304]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:50:40.095928 kubelet[2304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 00:50:40.096398 kubelet[2304]: I1031 00:50:40.096057 2304 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:50:40.465464 kubelet[2304]: I1031 00:50:40.465341 2304 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:50:40.465464 kubelet[2304]: I1031 00:50:40.465382 2304 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:50:40.466025 kubelet[2304]: I1031 00:50:40.465996 2304 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:50:41.670990 kubelet[2304]: E1031 00:50:41.670943 2304 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:41.673673 kubelet[2304]: I1031 00:50:41.673632 2304 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:50:41.680626 kubelet[2304]: E1031 00:50:41.680579 2304 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:50:41.680626 kubelet[2304]: I1031 00:50:41.680613 2304 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:50:41.685906 kubelet[2304]: I1031 00:50:41.685868 2304 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:50:41.687417 kubelet[2304]: I1031 00:50:41.687348 2304 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:50:41.687577 kubelet[2304]: I1031 00:50:41.687403 2304 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 00:50:41.687687 kubelet[2304]: I1031 00:50:41.687579 2304 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:50:41.687687 kubelet[2304]: I1031 00:50:41.687591 2304 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:50:41.687758 kubelet[2304]: I1031 00:50:41.687743 2304 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:50:41.690345 kubelet[2304]: I1031 00:50:41.690315 2304 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:50:41.691728 kubelet[2304]: I1031 00:50:41.691705 2304 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:50:41.691788 kubelet[2304]: I1031 00:50:41.691756 2304 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:50:41.691788 kubelet[2304]: I1031 00:50:41.691768 2304 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:50:41.693591 kubelet[2304]: W1031 00:50:41.693531 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:41.693591 kubelet[2304]: E1031 00:50:41.693585 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:41.698392 kubelet[2304]: W1031 00:50:41.696286 2304 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:41.698392 kubelet[2304]: E1031 00:50:41.696408 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:41.698392 kubelet[2304]: I1031 00:50:41.696606 2304 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:50:41.698392 kubelet[2304]: I1031 00:50:41.697093 2304 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:50:41.698392 kubelet[2304]: W1031 00:50:41.697589 2304 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 00:50:41.699300 kubelet[2304]: I1031 00:50:41.699279 2304 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:50:41.699338 kubelet[2304]: I1031 00:50:41.699319 2304 server.go:1287] "Started kubelet" Oct 31 00:50:41.700987 kubelet[2304]: I1031 00:50:41.699711 2304 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:50:41.700987 kubelet[2304]: I1031 00:50:41.700768 2304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:50:41.701064 kubelet[2304]: I1031 00:50:41.701046 2304 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:50:41.702194 kubelet[2304]: I1031 00:50:41.702167 2304 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:50:41.703904 kubelet[2304]: I1031 00:50:41.703783 2304 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:50:41.704002 kubelet[2304]: I1031 00:50:41.703974 2304 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:50:41.708157 kubelet[2304]: E1031 00:50:41.706188 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736d1af03fe980 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:50:41.699293568 +0000 UTC m=+1.633905535,LastTimestamp:2025-10-31 00:50:41.699293568 +0000 UTC m=+1.633905535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 00:50:41.708157 kubelet[2304]: I1031 00:50:41.707989 2304 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:50:41.708157 kubelet[2304]: E1031 00:50:41.708043 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:41.708157 
kubelet[2304]: I1031 00:50:41.708071 2304 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 00:50:41.708157 kubelet[2304]: E1031 00:50:41.708113 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="200ms" Oct 31 00:50:41.708157 kubelet[2304]: I1031 00:50:41.708137 2304 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:50:41.708773 kubelet[2304]: W1031 00:50:41.708730 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:41.708820 kubelet[2304]: I1031 00:50:41.708797 2304 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:50:41.708912 kubelet[2304]: I1031 00:50:41.708887 2304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:50:41.709019 kubelet[2304]: E1031 00:50:41.708789 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:41.709358 kubelet[2304]: E1031 00:50:41.709336 2304 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:50:41.709852 kubelet[2304]: I1031 00:50:41.709834 2304 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:50:41.724296 kubelet[2304]: I1031 00:50:41.724236 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 00:50:41.725683 kubelet[2304]: I1031 00:50:41.725641 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:50:41.725683 kubelet[2304]: I1031 00:50:41.725681 2304 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:50:41.725768 kubelet[2304]: I1031 00:50:41.725707 2304 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
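Every list/watch against https://10.0.0.160:6443 above fails with "connection refused", and the node-lease controller schedules retries whose interval doubles (200ms here, then 400ms, 800ms and 1.6s further down). A minimal Python sketch for pulling those retries out of a saved copy of this journal — the file name kubelet.log and the regular expressions are illustrative assumptions, not part of the log:

#!/usr/bin/env python3
# Sketch: summarize lease-retry back-off and "connection refused" errors
# from a saved kubelet journal. "kubelet.log" is an assumed file name.
import re
import sys

LEASE_RE = re.compile(r'"Failed to ensure lease exists, will retry".*?interval="([^"]+)"')
REFUSED_RE = re.compile(r"dial tcp ([\d.]+:\d+): connect: connection refused")

def summarize(path):
    text = open(path, encoding="utf-8", errors="replace").read()
    intervals = LEASE_RE.findall(text)          # e.g. ['200ms', '400ms', '800ms', '1.6s']
    endpoints = REFUSED_RE.findall(text)        # every refused dial, e.g. '10.0.0.160:6443'
    print("lease retry intervals:", intervals)
    print("connection refused:", len(endpoints), "times, endpoints:", sorted(set(endpoints)))

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")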
Oct 31 00:50:41.725768 kubelet[2304]: I1031 00:50:41.725717 2304 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:50:41.725835 kubelet[2304]: E1031 00:50:41.725790 2304 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:50:41.730665 kubelet[2304]: W1031 00:50:41.730578 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:41.730665 kubelet[2304]: E1031 00:50:41.730614 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:41.736048 kubelet[2304]: I1031 00:50:41.735819 2304 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:50:41.736048 kubelet[2304]: I1031 00:50:41.735837 2304 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:50:41.736048 kubelet[2304]: I1031 00:50:41.735854 2304 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:50:41.809256 kubelet[2304]: E1031 00:50:41.809199 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:41.826428 kubelet[2304]: E1031 00:50:41.826385 2304 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 00:50:41.909119 kubelet[2304]: E1031 00:50:41.909071 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="400ms" Oct 31 00:50:41.910060 kubelet[2304]: E1031 00:50:41.910028 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:42.010626 kubelet[2304]: E1031 00:50:42.010492 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:42.020591 kubelet[2304]: I1031 00:50:42.020553 2304 policy_none.go:49] "None policy: Start" Oct 31 00:50:42.020653 kubelet[2304]: I1031 00:50:42.020620 2304 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:50:42.020653 kubelet[2304]: I1031 00:50:42.020644 2304 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:50:42.026761 kubelet[2304]: E1031 00:50:42.026720 2304 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 00:50:42.110914 kubelet[2304]: E1031 00:50:42.110870 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:42.211011 kubelet[2304]: E1031 00:50:42.210963 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:42.309975 kubelet[2304]: E1031 00:50:42.309839 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="800ms" Oct 31 00:50:42.311853 kubelet[2304]: E1031 00:50:42.311813 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:42.369866 kubelet[2304]: I1031 00:50:42.369834 2304 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:50:42.370077 kubelet[2304]: I1031 00:50:42.370055 2304 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:50:42.370115 kubelet[2304]: I1031 00:50:42.370070 2304 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:50:42.371291 kubelet[2304]: I1031 00:50:42.370955 2304 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:50:42.372222 kubelet[2304]: E1031 00:50:42.372196 2304 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:50:42.372279 kubelet[2304]: E1031 00:50:42.372242 2304 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:50:42.434267 kubelet[2304]: E1031 00:50:42.433474 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:42.434402 kubelet[2304]: E1031 00:50:42.434314 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:42.436868 kubelet[2304]: E1031 00:50:42.436851 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:42.472151 kubelet[2304]: I1031 00:50:42.472114 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:50:42.472539 kubelet[2304]: E1031 00:50:42.472511 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Oct 31 00:50:42.512882 kubelet[2304]: I1031 00:50:42.512845 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:42.512946 kubelet[2304]: I1031 00:50:42.512879 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:42.512946 kubelet[2304]: I1031 00:50:42.512909 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:42.512946 kubelet[2304]: I1031 00:50:42.512927 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:42.512946 kubelet[2304]: I1031 00:50:42.512940 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:42.513049 kubelet[2304]: I1031 00:50:42.512955 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:42.513049 kubelet[2304]: I1031 00:50:42.512969 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:50:42.513049 kubelet[2304]: I1031 00:50:42.513001 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:42.513118 kubelet[2304]: I1031 00:50:42.513068 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:42.673928 kubelet[2304]: I1031 00:50:42.673820 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:50:42.674335 kubelet[2304]: E1031 00:50:42.674202 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Oct 31 00:50:42.734759 kubelet[2304]: E1031 00:50:42.734713 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:42.734857 kubelet[2304]: E1031 00:50:42.734713 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:42.735452 containerd[1589]: time="2025-10-31T00:50:42.735414425Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:327c0a53517aefe84db56d87af9081f7,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:42.735846 containerd[1589]: time="2025-10-31T00:50:42.735476918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:42.737570 kubelet[2304]: E1031 00:50:42.737542 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:42.737956 containerd[1589]: time="2025-10-31T00:50:42.737929413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:42.865938 kubelet[2304]: W1031 00:50:42.865868 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:42.866021 kubelet[2304]: E1031 00:50:42.865935 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:43.076200 kubelet[2304]: I1031 00:50:43.076099 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:50:43.076476 kubelet[2304]: E1031 00:50:43.076435 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Oct 31 00:50:43.111051 kubelet[2304]: E1031 00:50:43.111010 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="1.6s" Oct 31 00:50:43.121746 kubelet[2304]: W1031 00:50:43.121673 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:43.121746 kubelet[2304]: E1031 00:50:43.121740 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:43.212148 kubelet[2304]: W1031 00:50:43.212072 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:43.212197 kubelet[2304]: E1031 00:50:43.212151 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:43.265283 kubelet[2304]: W1031 00:50:43.265192 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Oct 31 00:50:43.265283 kubelet[2304]: E1031 00:50:43.265266 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:50:43.328074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542587321.mount: Deactivated successfully. Oct 31 00:50:43.336181 containerd[1589]: time="2025-10-31T00:50:43.336122213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:50:43.337126 containerd[1589]: time="2025-10-31T00:50:43.337082705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:50:43.337940 containerd[1589]: time="2025-10-31T00:50:43.337888532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:50:43.338916 containerd[1589]: time="2025-10-31T00:50:43.338883607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:50:43.339689 containerd[1589]: time="2025-10-31T00:50:43.339648486Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 31 00:50:43.340541 containerd[1589]: time="2025-10-31T00:50:43.340508860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:50:43.341461 containerd[1589]: time="2025-10-31T00:50:43.341425209Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:50:43.345239 containerd[1589]: time="2025-10-31T00:50:43.345201125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:50:43.346041 containerd[1589]: time="2025-10-31T00:50:43.346000154Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.435012ms" Oct 31 00:50:43.347265 containerd[1589]: time="2025-10-31T00:50:43.347219925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.728321ms" Oct 31 00:50:43.348331 containerd[1589]: time="2025-10-31T00:50:43.348299610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.318411ms" Oct 31 00:50:43.497533 containerd[1589]: time="2025-10-31T00:50:43.497435046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:43.497533 containerd[1589]: time="2025-10-31T00:50:43.497508891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:43.497533 containerd[1589]: time="2025-10-31T00:50:43.497436238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:43.497533 containerd[1589]: time="2025-10-31T00:50:43.497512340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:43.497533 containerd[1589]: time="2025-10-31T00:50:43.497531405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.497753 containerd[1589]: time="2025-10-31T00:50:43.497630731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.498516 containerd[1589]: time="2025-10-31T00:50:43.498327036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.498693 containerd[1589]: time="2025-10-31T00:50:43.498573873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.498693 containerd[1589]: time="2025-10-31T00:50:43.498468452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:43.498693 containerd[1589]: time="2025-10-31T00:50:43.498507465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:43.498693 containerd[1589]: time="2025-10-31T00:50:43.498523213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.498693 containerd[1589]: time="2025-10-31T00:50:43.498602231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:43.558975 containerd[1589]: time="2025-10-31T00:50:43.558917652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e041f2113a856fa66fcc0021a42bd5d5dbf7340b0b40802398f839059c1010b\"" Oct 31 00:50:43.562042 kubelet[2304]: E1031 00:50:43.562022 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.565499 containerd[1589]: time="2025-10-31T00:50:43.565470359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:327c0a53517aefe84db56d87af9081f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c006bbee8b112d445a2861da81c407770f5b109d6533afa24bd2d91131c30a08\"" Oct 31 00:50:43.566118 containerd[1589]: time="2025-10-31T00:50:43.566099775Z" level=info msg="CreateContainer within sandbox \"8e041f2113a856fa66fcc0021a42bd5d5dbf7340b0b40802398f839059c1010b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:50:43.566396 containerd[1589]: time="2025-10-31T00:50:43.566297576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"245b1da399c3077839ec628234d4004574a81d66de8cd0aeecdde3a2120872df\"" Oct 31 00:50:43.566528 kubelet[2304]: E1031 00:50:43.566484 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.567199 kubelet[2304]: E1031 00:50:43.567186 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.568037 containerd[1589]: time="2025-10-31T00:50:43.567975925Z" level=info msg="CreateContainer within sandbox \"c006bbee8b112d445a2861da81c407770f5b109d6533afa24bd2d91131c30a08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:50:43.569175 containerd[1589]: time="2025-10-31T00:50:43.569144563Z" level=info msg="CreateContainer within sandbox \"245b1da399c3077839ec628234d4004574a81d66de8cd0aeecdde3a2120872df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:50:43.590516 containerd[1589]: time="2025-10-31T00:50:43.590419428Z" level=info msg="CreateContainer within sandbox \"8e041f2113a856fa66fcc0021a42bd5d5dbf7340b0b40802398f839059c1010b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"670ee01e19ddbd5959732dd5a423c1f16e4abcd1be7a48fdc67af4d5f8f65265\"" Oct 31 00:50:43.591148 containerd[1589]: time="2025-10-31T00:50:43.590959812Z" level=info msg="StartContainer for \"670ee01e19ddbd5959732dd5a423c1f16e4abcd1be7a48fdc67af4d5f8f65265\"" Oct 31 00:50:43.595031 containerd[1589]: time="2025-10-31T00:50:43.594971098Z" level=info msg="CreateContainer within sandbox \"c006bbee8b112d445a2861da81c407770f5b109d6533afa24bd2d91131c30a08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c640c65fad50d29894aa68b1e2a80e5b95aeec3dce0231be63abdd9ac0397fe3\"" Oct 31 00:50:43.595362 containerd[1589]: time="2025-10-31T00:50:43.595326022Z" level=info msg="StartContainer for 
\"c640c65fad50d29894aa68b1e2a80e5b95aeec3dce0231be63abdd9ac0397fe3\"" Oct 31 00:50:43.596453 containerd[1589]: time="2025-10-31T00:50:43.596419050Z" level=info msg="CreateContainer within sandbox \"245b1da399c3077839ec628234d4004574a81d66de8cd0aeecdde3a2120872df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"282e7f2559503705725e78de7f0aadeafd1c5bb55df63084b5d1d4679b808f48\"" Oct 31 00:50:43.596834 containerd[1589]: time="2025-10-31T00:50:43.596791787Z" level=info msg="StartContainer for \"282e7f2559503705725e78de7f0aadeafd1c5bb55df63084b5d1d4679b808f48\"" Oct 31 00:50:43.664956 containerd[1589]: time="2025-10-31T00:50:43.664916682Z" level=info msg="StartContainer for \"670ee01e19ddbd5959732dd5a423c1f16e4abcd1be7a48fdc67af4d5f8f65265\" returns successfully" Oct 31 00:50:43.671434 containerd[1589]: time="2025-10-31T00:50:43.670334020Z" level=info msg="StartContainer for \"c640c65fad50d29894aa68b1e2a80e5b95aeec3dce0231be63abdd9ac0397fe3\" returns successfully" Oct 31 00:50:43.681428 containerd[1589]: time="2025-10-31T00:50:43.681405540Z" level=info msg="StartContainer for \"282e7f2559503705725e78de7f0aadeafd1c5bb55df63084b5d1d4679b808f48\" returns successfully" Oct 31 00:50:43.737849 kubelet[2304]: E1031 00:50:43.737814 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:43.738360 kubelet[2304]: E1031 00:50:43.738347 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.740558 kubelet[2304]: E1031 00:50:43.740533 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:43.740722 kubelet[2304]: E1031 00:50:43.740710 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.742811 kubelet[2304]: E1031 00:50:43.742798 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:43.742953 kubelet[2304]: E1031 00:50:43.742942 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:43.878380 kubelet[2304]: I1031 00:50:43.877880 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:50:44.717426 kubelet[2304]: E1031 00:50:44.717386 2304 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 00:50:44.745696 kubelet[2304]: E1031 00:50:44.745661 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:50:44.746484 kubelet[2304]: E1031 00:50:44.745802 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:44.746484 kubelet[2304]: E1031 00:50:44.745883 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Oct 31 00:50:44.746484 kubelet[2304]: E1031 00:50:44.746020 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:44.782811 kubelet[2304]: I1031 00:50:44.782772 2304 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:50:44.782811 kubelet[2304]: E1031 00:50:44.782818 2304 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:50:44.796522 kubelet[2304]: E1031 00:50:44.796487 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:44.896802 kubelet[2304]: E1031 00:50:44.896755 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:44.997481 kubelet[2304]: E1031 00:50:44.997285 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:45.108922 kubelet[2304]: I1031 00:50:45.108875 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:50:45.113511 kubelet[2304]: E1031 00:50:45.113471 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 00:50:45.113511 kubelet[2304]: I1031 00:50:45.113493 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:45.114575 kubelet[2304]: E1031 00:50:45.114550 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:45.114575 kubelet[2304]: I1031 00:50:45.114566 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:45.115863 kubelet[2304]: E1031 00:50:45.115840 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:45.592849 kubelet[2304]: I1031 00:50:45.592806 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:45.594425 kubelet[2304]: E1031 00:50:45.594408 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:45.594559 kubelet[2304]: E1031 00:50:45.594545 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:45.695480 kubelet[2304]: I1031 00:50:45.695445 2304 apiserver.go:52] "Watching apiserver" Oct 31 00:50:45.708500 kubelet[2304]: I1031 00:50:45.708472 2304 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:50:47.000311 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-7.scope)... 
Oct 31 00:50:47.000328 systemd[1]: Reloading... Oct 31 00:50:47.097471 zram_generator::config[2622]: No configuration found. Oct 31 00:50:47.213470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:50:47.298346 systemd[1]: Reloading finished in 297 ms. Oct 31 00:50:47.335912 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:47.362670 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:50:47.363065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:47.372571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:47.534413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:47.548798 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:50:47.590396 kubelet[2673]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:50:47.590396 kubelet[2673]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:50:47.590396 kubelet[2673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:50:47.590396 kubelet[2673]: I1031 00:50:47.590050 2673 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:50:47.604815 kubelet[2673]: I1031 00:50:47.602652 2673 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:50:47.604815 kubelet[2673]: I1031 00:50:47.602687 2673 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:50:47.604815 kubelet[2673]: I1031 00:50:47.602955 2673 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:50:47.604815 kubelet[2673]: I1031 00:50:47.604534 2673 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 00:50:47.607308 kubelet[2673]: I1031 00:50:47.607277 2673 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:50:47.615164 kubelet[2673]: E1031 00:50:47.615119 2673 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:50:47.615164 kubelet[2673]: I1031 00:50:47.615152 2673 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:50:47.620520 kubelet[2673]: I1031 00:50:47.620499 2673 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:50:47.621681 kubelet[2673]: I1031 00:50:47.621644 2673 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:50:47.621833 kubelet[2673]: I1031 00:50:47.621676 2673 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 00:50:47.621921 kubelet[2673]: I1031 00:50:47.621837 2673 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:50:47.621921 kubelet[2673]: I1031 00:50:47.621847 2673 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:50:47.621921 kubelet[2673]: I1031 00:50:47.621902 2673 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:50:47.622423 kubelet[2673]: I1031 00:50:47.622051 2673 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:50:47.622423 kubelet[2673]: I1031 00:50:47.622078 2673 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:50:47.622423 kubelet[2673]: I1031 00:50:47.622100 2673 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:50:47.622423 kubelet[2673]: I1031 00:50:47.622109 2673 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:50:47.623004 kubelet[2673]: I1031 00:50:47.622974 2673 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:50:47.623430 kubelet[2673]: I1031 00:50:47.623413 2673 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:50:47.623874 kubelet[2673]: I1031 00:50:47.623856 2673 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:50:47.623909 kubelet[2673]: I1031 00:50:47.623888 2673 server.go:1287] "Started kubelet" Oct 31 00:50:47.625495 kubelet[2673]: I1031 00:50:47.624590 2673 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:50:47.626467 kubelet[2673]: I1031 00:50:47.625589 2673 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:50:47.626467 kubelet[2673]: I1031 00:50:47.625864 2673 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:50:47.626467 kubelet[2673]: I1031 00:50:47.626270 2673 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:50:47.627953 kubelet[2673]: I1031 00:50:47.627928 2673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:50:47.628965 kubelet[2673]: I1031 00:50:47.628113 2673 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:50:47.630019 kubelet[2673]: I1031 00:50:47.629993 2673 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:50:47.630086 kubelet[2673]: I1031 00:50:47.630073 2673 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 00:50:47.630198 kubelet[2673]: I1031 00:50:47.630177 2673 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:50:47.631934 kubelet[2673]: I1031 00:50:47.631811 2673 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:50:47.631934 kubelet[2673]: I1031 00:50:47.631895 2673 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:50:47.633408 kubelet[2673]: E1031 00:50:47.632748 2673 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:50:47.633408 kubelet[2673]: E1031 00:50:47.633086 2673 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:50:47.633408 kubelet[2673]: I1031 00:50:47.633306 2673 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:50:47.648122 kubelet[2673]: I1031 00:50:47.648091 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 00:50:47.649834 kubelet[2673]: I1031 00:50:47.649821 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:50:47.649913 kubelet[2673]: I1031 00:50:47.649904 2673 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:50:47.649969 kubelet[2673]: I1031 00:50:47.649960 2673 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 00:50:47.650029 kubelet[2673]: I1031 00:50:47.650021 2673 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:50:47.650126 kubelet[2673]: E1031 00:50:47.650107 2673 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:50:47.685056 kubelet[2673]: I1031 00:50:47.685028 2673 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:50:47.685237 kubelet[2673]: I1031 00:50:47.685201 2673 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:50:47.685237 kubelet[2673]: I1031 00:50:47.685226 2673 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:50:47.685413 kubelet[2673]: I1031 00:50:47.685398 2673 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:50:47.685437 kubelet[2673]: I1031 00:50:47.685411 2673 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:50:47.685437 kubelet[2673]: I1031 00:50:47.685430 2673 policy_none.go:49] "None policy: Start" Oct 31 00:50:47.685475 kubelet[2673]: I1031 00:50:47.685440 2673 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:50:47.685475 kubelet[2673]: I1031 00:50:47.685450 2673 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:50:47.685555 kubelet[2673]: I1031 00:50:47.685542 2673 state_mem.go:75] "Updated machine memory state" Oct 31 00:50:47.687074 kubelet[2673]: I1031 00:50:47.687047 2673 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:50:47.687258 kubelet[2673]: I1031 00:50:47.687224 2673 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:50:47.687282 kubelet[2673]: I1031 00:50:47.687248 2673 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:50:47.688081 kubelet[2673]: I1031 00:50:47.688056 2673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:50:47.689089 kubelet[2673]: E1031 00:50:47.688947 2673 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 00:50:47.751325 kubelet[2673]: I1031 00:50:47.751261 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:47.751524 kubelet[2673]: I1031 00:50:47.751393 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.751524 kubelet[2673]: I1031 00:50:47.751455 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:50:47.792031 kubelet[2673]: I1031 00:50:47.791999 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:50:47.797679 kubelet[2673]: I1031 00:50:47.797640 2673 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:50:47.797852 kubelet[2673]: I1031 00:50:47.797713 2673 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:50:47.931026 kubelet[2673]: I1031 00:50:47.930901 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:47.931026 kubelet[2673]: I1031 00:50:47.930947 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:47.931026 kubelet[2673]: I1031 00:50:47.930977 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.931026 kubelet[2673]: I1031 00:50:47.931002 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.931199 kubelet[2673]: I1031 00:50:47.931052 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:50:47.931199 kubelet[2673]: I1031 00:50:47.931090 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/327c0a53517aefe84db56d87af9081f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"327c0a53517aefe84db56d87af9081f7\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:47.931199 kubelet[2673]: I1031 00:50:47.931110 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.931199 kubelet[2673]: I1031 00:50:47.931130 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.931199 kubelet[2673]: I1031 00:50:47.931151 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:50:47.997426 sudo[2707]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 31 00:50:47.997786 sudo[2707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 31 00:50:48.057522 kubelet[2673]: E1031 00:50:48.057477 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.059186 kubelet[2673]: E1031 00:50:48.059151 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.059306 kubelet[2673]: E1031 00:50:48.059203 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.481829 sudo[2707]: pam_unix(sudo:session): session closed for user root Oct 31 00:50:48.623978 kubelet[2673]: I1031 00:50:48.623930 2673 apiserver.go:52] "Watching apiserver" Oct 31 00:50:48.630771 kubelet[2673]: I1031 00:50:48.630732 2673 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:50:48.661100 kubelet[2673]: I1031 00:50:48.661061 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:48.661100 kubelet[2673]: E1031 00:50:48.661106 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.663396 kubelet[2673]: E1031 00:50:48.663358 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.665338 kubelet[2673]: I1031 00:50:48.665298 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6651603449999999 podStartE2EDuration="1.665160345s" podCreationTimestamp="2025-10-31 00:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:50:48.665078764 +0000 UTC m=+1.112422041" watchObservedRunningTime="2025-10-31 00:50:48.665160345 +0000 UTC m=+1.112503622" Oct 31 00:50:48.667976 
kubelet[2673]: E1031 00:50:48.667934 2673 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:50:48.668227 kubelet[2673]: E1031 00:50:48.668077 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:48.678666 kubelet[2673]: I1031 00:50:48.678596 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.678579881 podStartE2EDuration="1.678579881s" podCreationTimestamp="2025-10-31 00:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:50:48.672962338 +0000 UTC m=+1.120305615" watchObservedRunningTime="2025-10-31 00:50:48.678579881 +0000 UTC m=+1.125923158" Oct 31 00:50:48.686405 kubelet[2673]: I1031 00:50:48.686029 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.686020193 podStartE2EDuration="1.686020193s" podCreationTimestamp="2025-10-31 00:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:50:48.678667996 +0000 UTC m=+1.126011273" watchObservedRunningTime="2025-10-31 00:50:48.686020193 +0000 UTC m=+1.133363460" Oct 31 00:50:49.662706 kubelet[2673]: E1031 00:50:49.662660 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:49.663222 kubelet[2673]: E1031 00:50:49.662769 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:49.817840 sudo[1781]: pam_unix(sudo:session): session closed for user root Oct 31 00:50:49.820004 sshd[1774]: pam_unix(sshd:session): session closed for user core Oct 31 00:50:49.823098 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:55644.service: Deactivated successfully. Oct 31 00:50:49.826766 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Oct 31 00:50:49.827265 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 00:50:49.828465 systemd-logind[1562]: Removed session 7. Oct 31 00:50:50.663756 kubelet[2673]: E1031 00:50:50.663719 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:53.720707 kubelet[2673]: I1031 00:50:53.720669 2673 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:50:53.721312 containerd[1589]: time="2025-10-31T00:50:53.721063140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
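The dns.go:153 warning recurs throughout this boot: the host resolver configuration evidently lists more nameservers than the kubelet will pass through, so the extras are dropped and the applied line becomes 1.1.1.1 1.0.0.1 8.8.8.8. A small sketch for counting the warning and showing which nameserver line was actually kept (kubelet.log is an assumed file name):

#!/usr/bin/env python3
# Sketch: count "Nameserver limits exceeded" warnings and report the applied
# nameserver lines. "kubelet.log" is an assumed file name.
import re
from collections import Counter

DNS_RE = re.compile(r'the applied nameserver line is: ([0-9A-Fa-f:. ]+)\\?"')

def dns_warnings(path="kubelet.log"):
    text = open(path, encoding="utf-8", errors="replace").read()
    applied = [m.strip() for m in DNS_RE.findall(text)]
    return len(applied), Counter(applied)

if __name__ == "__main__":
    total, lines = dns_warnings()
    print(f"{total} 'Nameserver limits exceeded' warnings")
    for line, n in lines.most_common():
        print(f"  {n:3d}x applied: {line}")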
Oct 31 00:50:53.721660 kubelet[2673]: I1031 00:50:53.721308 2673 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:50:53.732173 kubelet[2673]: E1031 00:50:53.732140 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:54.689916 kubelet[2673]: I1031 00:50:54.689870 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a509c5ce-883a-4ce8-ad78-89c82b371ec5-kube-proxy\") pod \"kube-proxy-5qfvh\" (UID: \"a509c5ce-883a-4ce8-ad78-89c82b371ec5\") " pod="kube-system/kube-proxy-5qfvh" Oct 31 00:50:54.689916 kubelet[2673]: I1031 00:50:54.689913 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a509c5ce-883a-4ce8-ad78-89c82b371ec5-xtables-lock\") pod \"kube-proxy-5qfvh\" (UID: \"a509c5ce-883a-4ce8-ad78-89c82b371ec5\") " pod="kube-system/kube-proxy-5qfvh" Oct 31 00:50:54.690134 kubelet[2673]: I1031 00:50:54.689936 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a509c5ce-883a-4ce8-ad78-89c82b371ec5-lib-modules\") pod \"kube-proxy-5qfvh\" (UID: \"a509c5ce-883a-4ce8-ad78-89c82b371ec5\") " pod="kube-system/kube-proxy-5qfvh" Oct 31 00:50:54.690134 kubelet[2673]: I1031 00:50:54.689956 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ptr\" (UniqueName: \"kubernetes.io/projected/a509c5ce-883a-4ce8-ad78-89c82b371ec5-kube-api-access-r9ptr\") pod \"kube-proxy-5qfvh\" (UID: \"a509c5ce-883a-4ce8-ad78-89c82b371ec5\") " pod="kube-system/kube-proxy-5qfvh" Oct 31 00:50:54.694576 kubelet[2673]: E1031 00:50:54.694526 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:54.790479 kubelet[2673]: I1031 00:50:54.790413 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-run\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.790479 kubelet[2673]: I1031 00:50:54.790461 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-hostproc\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.790479 kubelet[2673]: I1031 00:50:54.790479 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-lib-modules\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.790479 kubelet[2673]: I1031 00:50:54.790498 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-xtables-lock\") pod \"cilium-t7nv5\" (UID: 
\"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790515 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-net\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790535 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8b9\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-kube-api-access-vq8b9\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790568 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-cgroup\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790586 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cni-path\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790609 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-hubble-tls\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791193 kubelet[2673]: I1031 00:50:54.790647 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-etc-cni-netd\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791415 kubelet[2673]: I1031 00:50:54.790665 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7740736a-761c-4d0a-8de4-75a9b1d133a8-clustermesh-secrets\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791415 kubelet[2673]: I1031 00:50:54.790681 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-config-path\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791415 kubelet[2673]: I1031 00:50:54.790696 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-kernel\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.791415 kubelet[2673]: I1031 00:50:54.790712 2673 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-bpf-maps\") pod \"cilium-t7nv5\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " pod="kube-system/cilium-t7nv5" Oct 31 00:50:54.978941 kubelet[2673]: E1031 00:50:54.978908 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:54.979549 containerd[1589]: time="2025-10-31T00:50:54.979503524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qfvh,Uid:a509c5ce-883a-4ce8-ad78-89c82b371ec5,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:54.991897 kubelet[2673]: I1031 00:50:54.991854 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgsc\" (UniqueName: \"kubernetes.io/projected/349fc37b-e73e-4ddb-b11e-f39a21204d00-kube-api-access-vcgsc\") pod \"cilium-operator-6c4d7847fc-7hcls\" (UID: \"349fc37b-e73e-4ddb-b11e-f39a21204d00\") " pod="kube-system/cilium-operator-6c4d7847fc-7hcls" Oct 31 00:50:54.991897 kubelet[2673]: I1031 00:50:54.991900 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349fc37b-e73e-4ddb-b11e-f39a21204d00-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7hcls\" (UID: \"349fc37b-e73e-4ddb-b11e-f39a21204d00\") " pod="kube-system/cilium-operator-6c4d7847fc-7hcls" Oct 31 00:50:55.081539 containerd[1589]: time="2025-10-31T00:50:55.080795469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:55.081539 containerd[1589]: time="2025-10-31T00:50:55.081485231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:55.081539 containerd[1589]: time="2025-10-31T00:50:55.081505935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.081712 containerd[1589]: time="2025-10-31T00:50:55.081634326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.120980 containerd[1589]: time="2025-10-31T00:50:55.120940660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qfvh,Uid:a509c5ce-883a-4ce8-ad78-89c82b371ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"148e5a31c80f72971622c1c439e7a43e76092cfd60226c6ab5f9c3e209b923e2\"" Oct 31 00:50:55.121811 kubelet[2673]: E1031 00:50:55.121788 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.123864 containerd[1589]: time="2025-10-31T00:50:55.123823201Z" level=info msg="CreateContainer within sandbox \"148e5a31c80f72971622c1c439e7a43e76092cfd60226c6ab5f9c3e209b923e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:50:55.139817 containerd[1589]: time="2025-10-31T00:50:55.139781576Z" level=info msg="CreateContainer within sandbox \"148e5a31c80f72971622c1c439e7a43e76092cfd60226c6ab5f9c3e209b923e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b844d793a79812c26c9f4c0aa294c83cdc1af008e8da6ca17520df8e0b82c50a\"" Oct 31 00:50:55.140524 containerd[1589]: time="2025-10-31T00:50:55.140477953Z" level=info msg="StartContainer for \"b844d793a79812c26c9f4c0aa294c83cdc1af008e8da6ca17520df8e0b82c50a\"" Oct 31 00:50:55.201491 containerd[1589]: time="2025-10-31T00:50:55.201447137Z" level=info msg="StartContainer for \"b844d793a79812c26c9f4c0aa294c83cdc1af008e8da6ca17520df8e0b82c50a\" returns successfully" Oct 31 00:50:55.208180 kubelet[2673]: E1031 00:50:55.208153 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.208898 containerd[1589]: time="2025-10-31T00:50:55.208806298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hcls,Uid:349fc37b-e73e-4ddb-b11e-f39a21204d00,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:55.241068 containerd[1589]: time="2025-10-31T00:50:55.240807366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:55.242033 containerd[1589]: time="2025-10-31T00:50:55.240976352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:55.242033 containerd[1589]: time="2025-10-31T00:50:55.241878261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.242103 containerd[1589]: time="2025-10-31T00:50:55.241984134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.290551 kubelet[2673]: E1031 00:50:55.290513 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.291139 containerd[1589]: time="2025-10-31T00:50:55.291098528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7nv5,Uid:7740736a-761c-4d0a-8de4-75a9b1d133a8,Namespace:kube-system,Attempt:0,}" Oct 31 00:50:55.303171 containerd[1589]: time="2025-10-31T00:50:55.303128217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hcls,Uid:349fc37b-e73e-4ddb-b11e-f39a21204d00,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\"" Oct 31 00:50:55.304420 kubelet[2673]: E1031 00:50:55.304385 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.307649 containerd[1589]: time="2025-10-31T00:50:55.307608605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 31 00:50:55.323700 containerd[1589]: time="2025-10-31T00:50:55.322804413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:50:55.323863 containerd[1589]: time="2025-10-31T00:50:55.322998251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:50:55.323863 containerd[1589]: time="2025-10-31T00:50:55.323606131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.323929 containerd[1589]: time="2025-10-31T00:50:55.323873705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:50:55.365828 containerd[1589]: time="2025-10-31T00:50:55.365782281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7nv5,Uid:7740736a-761c-4d0a-8de4-75a9b1d133a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\"" Oct 31 00:50:55.366892 kubelet[2673]: E1031 00:50:55.366711 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.698462 kubelet[2673]: E1031 00:50:55.698329 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:55.956842 kubelet[2673]: I1031 00:50:55.956759 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5qfvh" podStartSLOduration=1.956738383 podStartE2EDuration="1.956738383s" podCreationTimestamp="2025-10-31 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:50:55.956534784 +0000 UTC m=+8.403878061" watchObservedRunningTime="2025-10-31 00:50:55.956738383 +0000 UTC m=+8.404081660" Oct 31 00:50:56.788151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696017820.mount: Deactivated successfully. Oct 31 00:50:57.149103 containerd[1589]: time="2025-10-31T00:50:57.148973375Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:57.149736 containerd[1589]: time="2025-10-31T00:50:57.149698015Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Oct 31 00:50:57.150764 containerd[1589]: time="2025-10-31T00:50:57.150719602Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:50:57.152046 containerd[1589]: time="2025-10-31T00:50:57.152000491Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.844344375s" Oct 31 00:50:57.152046 containerd[1589]: time="2025-10-31T00:50:57.152043831Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 31 00:50:57.154185 containerd[1589]: time="2025-10-31T00:50:57.153732909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 31 00:50:57.154542 containerd[1589]: time="2025-10-31T00:50:57.154497191Z" level=info msg="CreateContainer within sandbox \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 31 00:50:57.166520 containerd[1589]: time="2025-10-31T00:50:57.166478939Z" level=info msg="CreateContainer within sandbox \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\"" Oct 31 00:50:57.166787 containerd[1589]: time="2025-10-31T00:50:57.166763311Z" level=info msg="StartContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\"" Oct 31 00:50:57.217648 containerd[1589]: time="2025-10-31T00:50:57.217605706Z" level=info msg="StartContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" returns successfully" Oct 31 00:50:57.703712 kubelet[2673]: E1031 00:50:57.703663 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:58.474598 kubelet[2673]: E1031 00:50:58.474252 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:58.482497 kubelet[2673]: I1031 00:50:58.482440 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7hcls" podStartSLOduration=2.634535975 podStartE2EDuration="4.482423138s" podCreationTimestamp="2025-10-31 00:50:54 +0000 UTC" firstStartedPulling="2025-10-31 00:50:55.305188657 +0000 UTC m=+7.752531934" lastFinishedPulling="2025-10-31 00:50:57.15307582 +0000 UTC m=+9.600419097" observedRunningTime="2025-10-31 00:50:57.97823391 +0000 UTC m=+10.425577187" watchObservedRunningTime="2025-10-31 00:50:58.482423138 +0000 UTC m=+10.929766416" Oct 31 00:50:58.708459 kubelet[2673]: E1031 00:50:58.708424 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:58.708459 kubelet[2673]: E1031 00:50:58.708424 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:59.499024 kubelet[2673]: E1031 00:50:59.498976 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:50:59.833243 update_engine[1565]: I20251031 00:50:59.833046 1565 update_attempter.cc:509] Updating boot flags... Oct 31 00:51:00.022269 kubelet[2673]: E1031 00:51:00.022212 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:00.158477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3101) Oct 31 00:51:00.200204 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3103) Oct 31 00:51:01.700656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000008440.mount: Deactivated successfully. Oct 31 00:51:01.724432 systemd-journald[1152]: Under memory pressure, flushing caches. Oct 31 00:51:01.722559 systemd-resolved[1467]: Under memory pressure, flushing caches. 
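For the cilium-operator-6c4d7847fc-7hcls entry above, podStartSLOduration and podStartE2EDuration differ, and the gap is the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check using the seconds fields copied from that entry:

# Seconds after 00:50:00, copied from the cilium-operator startup-latency entry above.
first_pull = 55.305188657    # firstStartedPulling
last_pull  = 57.153075820    # lastFinishedPulling
e2e        = 4.482423138     # podStartE2EDuration

print(round(e2e - (last_pull - first_pull), 9))   # 2.634535975, the logged podStartSLOduration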
Oct 31 00:51:01.722614 systemd-resolved[1467]: Flushed all caches. Oct 31 00:51:03.772390 systemd-journald[1152]: Under memory pressure, flushing caches. Oct 31 00:51:03.770473 systemd-resolved[1467]: Under memory pressure, flushing caches. Oct 31 00:51:03.770483 systemd-resolved[1467]: Flushed all caches. Oct 31 00:51:05.633060 containerd[1589]: time="2025-10-31T00:51:05.632992888Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:05.633774 containerd[1589]: time="2025-10-31T00:51:05.633700725Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Oct 31 00:51:05.634904 containerd[1589]: time="2025-10-31T00:51:05.634869619Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:05.636281 containerd[1589]: time="2025-10-31T00:51:05.636247535Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.482478561s" Oct 31 00:51:05.636326 containerd[1589]: time="2025-10-31T00:51:05.636280572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 31 00:51:05.639768 containerd[1589]: time="2025-10-31T00:51:05.639738909Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:51:05.652820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740992086.mount: Deactivated successfully. 
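The "stop pulling image" message for the quay.io/cilium/cilium image above reports bytes read=166730503, and the Pulled message reports the 8.482478561s wall-clock pull time, which gives a rough average transfer rate for this pull (an estimate only, since "bytes read" counts what containerd actually fetched for this request):

bytes_read = 166_730_503     # "bytes read" from the stop-pulling message above
pull_secs  = 8.482478561     # pull duration from the Pulled message above

print(f"{bytes_read / pull_secs / (1 << 20):.1f} MiB/s")   # ≈ 18.7 MiB/s average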
Oct 31 00:51:05.655823 containerd[1589]: time="2025-10-31T00:51:05.655755395Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\"" Oct 31 00:51:05.656397 containerd[1589]: time="2025-10-31T00:51:05.656352870Z" level=info msg="StartContainer for \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\"" Oct 31 00:51:05.716045 containerd[1589]: time="2025-10-31T00:51:05.716000293Z" level=info msg="StartContainer for \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\" returns successfully" Oct 31 00:51:05.775205 containerd[1589]: time="2025-10-31T00:51:05.772829349Z" level=info msg="shim disconnected" id=7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3 namespace=k8s.io Oct 31 00:51:05.775205 containerd[1589]: time="2025-10-31T00:51:05.775187740Z" level=warning msg="cleaning up after shim disconnected" id=7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3 namespace=k8s.io Oct 31 00:51:05.775205 containerd[1589]: time="2025-10-31T00:51:05.775201308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:51:06.027393 kubelet[2673]: E1031 00:51:06.025540 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:06.033327 containerd[1589]: time="2025-10-31T00:51:06.033279700Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:51:06.063742 containerd[1589]: time="2025-10-31T00:51:06.063682286Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\"" Oct 31 00:51:06.064445 containerd[1589]: time="2025-10-31T00:51:06.064417182Z" level=info msg="StartContainer for \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\"" Oct 31 00:51:06.139258 containerd[1589]: time="2025-10-31T00:51:06.139208541Z" level=info msg="StartContainer for \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\" returns successfully" Oct 31 00:51:06.150606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 00:51:06.151404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:51:06.151554 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:51:06.159755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:51:06.178552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
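The recurring dns.go:153 warning in these entries fires because the node's resolver configuration lists more nameservers than the kubelet will pass through to pods; the applied line keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that check, assuming a limit of three as implied by the applied line and reading the conventional /etc/resolv.conf path:

# Sketch only: count nameserver entries and flag anything past the assumed limit of 3.
LIMIT = 3

def nameservers(path="/etc/resolv.conf"):   # the kubelet may be pointed at a different resolver file
    with open(path) as f:
        return [parts[1] for line in f
                if line.strip().startswith("nameserver") and len(parts := line.split()) > 1]

ns = nameservers()
if len(ns) > LIMIT:
    print(f"{len(ns)} nameservers listed; only the first {LIMIT} would be applied: {ns[:LIMIT]}")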
Oct 31 00:51:06.179711 containerd[1589]: time="2025-10-31T00:51:06.179653904Z" level=info msg="shim disconnected" id=2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3 namespace=k8s.io Oct 31 00:51:06.179809 containerd[1589]: time="2025-10-31T00:51:06.179713133Z" level=warning msg="cleaning up after shim disconnected" id=2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3 namespace=k8s.io Oct 31 00:51:06.179809 containerd[1589]: time="2025-10-31T00:51:06.179725719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:51:06.648991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3-rootfs.mount: Deactivated successfully. Oct 31 00:51:07.028846 kubelet[2673]: E1031 00:51:07.028796 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:07.031468 containerd[1589]: time="2025-10-31T00:51:07.031420260Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 31 00:51:07.049550 containerd[1589]: time="2025-10-31T00:51:07.049502706Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\"" Oct 31 00:51:07.050012 containerd[1589]: time="2025-10-31T00:51:07.049981996Z" level=info msg="StartContainer for \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\"" Oct 31 00:51:07.110927 containerd[1589]: time="2025-10-31T00:51:07.110890793Z" level=info msg="StartContainer for \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\" returns successfully" Oct 31 00:51:07.134210 containerd[1589]: time="2025-10-31T00:51:07.134140306Z" level=info msg="shim disconnected" id=b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18 namespace=k8s.io Oct 31 00:51:07.134210 containerd[1589]: time="2025-10-31T00:51:07.134208182Z" level=warning msg="cleaning up after shim disconnected" id=b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18 namespace=k8s.io Oct 31 00:51:07.134441 containerd[1589]: time="2025-10-31T00:51:07.134220336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:51:07.648293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18-rootfs.mount: Deactivated successfully. 
Oct 31 00:51:08.032037 kubelet[2673]: E1031 00:51:08.031987 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:08.034740 containerd[1589]: time="2025-10-31T00:51:08.034666846Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 31 00:51:08.297821 containerd[1589]: time="2025-10-31T00:51:08.297713753Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\"" Oct 31 00:51:08.299262 containerd[1589]: time="2025-10-31T00:51:08.299232215Z" level=info msg="StartContainer for \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\"" Oct 31 00:51:08.356188 containerd[1589]: time="2025-10-31T00:51:08.356142721Z" level=info msg="StartContainer for \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\" returns successfully" Oct 31 00:51:08.379510 containerd[1589]: time="2025-10-31T00:51:08.379455451Z" level=info msg="shim disconnected" id=480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916 namespace=k8s.io Oct 31 00:51:08.379510 containerd[1589]: time="2025-10-31T00:51:08.379505491Z" level=warning msg="cleaning up after shim disconnected" id=480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916 namespace=k8s.io Oct 31 00:51:08.379510 containerd[1589]: time="2025-10-31T00:51:08.379513698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:51:08.648283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916-rootfs.mount: Deactivated successfully. 
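Taken together, the CreateContainer messages for sandbox 0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546 above name the cilium-t7nv5 containers in creation order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, with cilium-agent following in the entries below. A small sketch that recovers that order from the message text (sandbox id shortened in these sample strings):

import re

# Abbreviated copies of the containerd CreateContainer messages above.
msgs = [
    'CreateContainer within sandbox "0dff4f41..." for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}',
    'CreateContainer within sandbox "0dff4f41..." for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}',
    'CreateContainer within sandbox "0dff4f41..." for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}',
    'CreateContainer within sandbox "0dff4f41..." for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}',
]
order = [re.search(r"Name:([\w-]+)", m).group(1) for m in msgs]
print(order)   # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state']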
Oct 31 00:51:09.035211 kubelet[2673]: E1031 00:51:09.035186 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:09.037562 containerd[1589]: time="2025-10-31T00:51:09.036729737Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 31 00:51:09.052615 containerd[1589]: time="2025-10-31T00:51:09.052564151Z" level=info msg="CreateContainer within sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\"" Oct 31 00:51:09.053812 containerd[1589]: time="2025-10-31T00:51:09.053027213Z" level=info msg="StartContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\"" Oct 31 00:51:09.102568 containerd[1589]: time="2025-10-31T00:51:09.102533436Z" level=info msg="StartContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" returns successfully" Oct 31 00:51:09.208962 kubelet[2673]: I1031 00:51:09.208725 2673 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 00:51:09.239719 kubelet[2673]: W1031 00:51:09.239682 2673 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 31 00:51:09.239834 kubelet[2673]: E1031 00:51:09.239727 2673 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 31 00:51:09.239834 kubelet[2673]: I1031 00:51:09.239796 2673 status_manager.go:890] "Failed to get status for pod" podUID="761399ed-44ac-4ca9-bc3c-b54f23c0bcc9" pod="kube-system/coredns-668d6bf9bc-bvw7k" err="pods \"coredns-668d6bf9bc-bvw7k\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Oct 31 00:51:09.292086 kubelet[2673]: I1031 00:51:09.291980 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/761399ed-44ac-4ca9-bc3c-b54f23c0bcc9-config-volume\") pod \"coredns-668d6bf9bc-bvw7k\" (UID: \"761399ed-44ac-4ca9-bc3c-b54f23c0bcc9\") " pod="kube-system/coredns-668d6bf9bc-bvw7k" Oct 31 00:51:09.292086 kubelet[2673]: I1031 00:51:09.292032 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71f7e319-4b6c-4422-b260-0754c2c2c444-config-volume\") pod \"coredns-668d6bf9bc-hhrcd\" (UID: \"71f7e319-4b6c-4422-b260-0754c2c2c444\") " pod="kube-system/coredns-668d6bf9bc-hhrcd" Oct 31 00:51:09.292086 kubelet[2673]: I1031 00:51:09.292049 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vfz8f\" (UniqueName: \"kubernetes.io/projected/761399ed-44ac-4ca9-bc3c-b54f23c0bcc9-kube-api-access-vfz8f\") pod \"coredns-668d6bf9bc-bvw7k\" (UID: \"761399ed-44ac-4ca9-bc3c-b54f23c0bcc9\") " pod="kube-system/coredns-668d6bf9bc-bvw7k" Oct 31 00:51:09.292086 kubelet[2673]: I1031 00:51:09.292065 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzmk\" (UniqueName: \"kubernetes.io/projected/71f7e319-4b6c-4422-b260-0754c2c2c444-kube-api-access-2rzmk\") pod \"coredns-668d6bf9bc-hhrcd\" (UID: \"71f7e319-4b6c-4422-b260-0754c2c2c444\") " pod="kube-system/coredns-668d6bf9bc-hhrcd" Oct 31 00:51:10.040105 kubelet[2673]: E1031 00:51:10.040051 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:10.055134 kubelet[2673]: I1031 00:51:10.055071 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t7nv5" podStartSLOduration=5.784352532 podStartE2EDuration="16.055054411s" podCreationTimestamp="2025-10-31 00:50:54 +0000 UTC" firstStartedPulling="2025-10-31 00:50:55.367113495 +0000 UTC m=+7.814456772" lastFinishedPulling="2025-10-31 00:51:05.637815374 +0000 UTC m=+18.085158651" observedRunningTime="2025-10-31 00:51:10.055001225 +0000 UTC m=+22.502344522" watchObservedRunningTime="2025-10-31 00:51:10.055054411 +0000 UTC m=+22.502397688" Oct 31 00:51:10.393655 kubelet[2673]: E1031 00:51:10.393512 2673 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 31 00:51:10.393655 kubelet[2673]: E1031 00:51:10.393611 2673 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/761399ed-44ac-4ca9-bc3c-b54f23c0bcc9-config-volume podName:761399ed-44ac-4ca9-bc3c-b54f23c0bcc9 nodeName:}" failed. No retries permitted until 2025-10-31 00:51:10.893589317 +0000 UTC m=+23.340932594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/761399ed-44ac-4ca9-bc3c-b54f23c0bcc9-config-volume") pod "coredns-668d6bf9bc-bvw7k" (UID: "761399ed-44ac-4ca9-bc3c-b54f23c0bcc9") : failed to sync configmap cache: timed out waiting for the condition Oct 31 00:51:10.393655 kubelet[2673]: E1031 00:51:10.393514 2673 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 31 00:51:10.393655 kubelet[2673]: E1031 00:51:10.393663 2673 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71f7e319-4b6c-4422-b260-0754c2c2c444-config-volume podName:71f7e319-4b6c-4422-b260-0754c2c2c444 nodeName:}" failed. No retries permitted until 2025-10-31 00:51:10.893652392 +0000 UTC m=+23.340995669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/71f7e319-4b6c-4422-b260-0754c2c2c444-config-volume") pod "coredns-668d6bf9bc-hhrcd" (UID: "71f7e319-4b6c-4422-b260-0754c2c2c444") : failed to sync configmap cache: timed out waiting for the condition Oct 31 00:51:11.041880 kubelet[2673]: E1031 00:51:11.041828 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:11.042345 kubelet[2673]: E1031 00:51:11.042018 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:11.042917 containerd[1589]: time="2025-10-31T00:51:11.042563233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvw7k,Uid:761399ed-44ac-4ca9-bc3c-b54f23c0bcc9,Namespace:kube-system,Attempt:0,}" Oct 31 00:51:11.057278 kubelet[2673]: E1031 00:51:11.057240 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:11.057833 containerd[1589]: time="2025-10-31T00:51:11.057676750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hhrcd,Uid:71f7e319-4b6c-4422-b260-0754c2c2c444,Namespace:kube-system,Attempt:0,}" Oct 31 00:51:11.270783 systemd-networkd[1263]: cilium_host: Link UP Oct 31 00:51:11.273583 systemd-networkd[1263]: cilium_net: Link UP Oct 31 00:51:11.273791 systemd-networkd[1263]: cilium_net: Gained carrier Oct 31 00:51:11.273961 systemd-networkd[1263]: cilium_host: Gained carrier Oct 31 00:51:11.388155 systemd-networkd[1263]: cilium_vxlan: Link UP Oct 31 00:51:11.388166 systemd-networkd[1263]: cilium_vxlan: Gained carrier Oct 31 00:51:11.482545 systemd-networkd[1263]: cilium_net: Gained IPv6LL Oct 31 00:51:11.593408 kernel: NET: Registered PF_ALG protocol family Oct 31 00:51:12.043384 kubelet[2673]: E1031 00:51:12.043339 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:12.235814 systemd-networkd[1263]: lxc_health: Link UP Oct 31 00:51:12.252744 systemd-networkd[1263]: lxc_health: Gained carrier Oct 31 00:51:12.284466 systemd-networkd[1263]: cilium_host: Gained IPv6LL Oct 31 00:51:12.814530 systemd-networkd[1263]: lxcaa1c1689c3bb: Link UP Oct 31 00:51:12.825789 systemd-networkd[1263]: lxcdebbb168895e: Link UP Oct 31 00:51:12.893494 kernel: eth0: renamed from tmp4190a Oct 31 00:51:12.898867 systemd-networkd[1263]: lxcaa1c1689c3bb: Gained carrier Oct 31 00:51:12.899606 kernel: eth0: renamed from tmpeb3cf Oct 31 00:51:12.905978 systemd-networkd[1263]: lxcdebbb168895e: Gained carrier Oct 31 00:51:13.051505 systemd-networkd[1263]: cilium_vxlan: Gained IPv6LL Oct 31 00:51:13.260763 systemd[1]: Started sshd@7-10.0.0.160:22-10.0.0.1:57974.service - OpenSSH per-connection server daemon (10.0.0.1:57974). 
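The MountVolume.SetUp failures above also show the retry bookkeeping from nestedpendingoperations.go: the "No retries permitted until" timestamp is the failure time plus the logged durationBeforeRetry of 500ms. A quick check, with nanoseconds truncated to microseconds:

from datetime import datetime, timedelta, timezone

# Values copied from the coredns-668d6bf9bc-bvw7k config-volume failure above.
no_retry_until = datetime(2025, 10, 31, 0, 51, 10, 893589, tzinfo=timezone.utc)
backoff        = timedelta(milliseconds=500)    # durationBeforeRetry

print(no_retry_until - backoff)   # 2025-10-31 00:51:10.393589+00:00, the moment the mount failed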
Oct 31 00:51:13.290944 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 57974 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:13.292686 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:13.294419 kubelet[2673]: E1031 00:51:13.293619 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:13.297206 systemd-logind[1562]: New session 8 of user core. Oct 31 00:51:13.304632 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 00:51:13.555445 sshd[3890]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:13.558765 systemd[1]: sshd@7-10.0.0.160:22-10.0.0.1:57974.service: Deactivated successfully. Oct 31 00:51:13.562302 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:51:13.563318 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 00:51:13.563458 systemd-networkd[1263]: lxc_health: Gained IPv6LL Oct 31 00:51:13.565316 systemd-logind[1562]: Removed session 8. Oct 31 00:51:14.394574 systemd-networkd[1263]: lxcaa1c1689c3bb: Gained IPv6LL Oct 31 00:51:14.586535 systemd-networkd[1263]: lxcdebbb168895e: Gained IPv6LL Oct 31 00:51:15.449398 kubelet[2673]: I1031 00:51:15.449316 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:51:15.450010 kubelet[2673]: E1031 00:51:15.449844 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:16.051397 kubelet[2673]: E1031 00:51:16.051353 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:16.290207 containerd[1589]: time="2025-10-31T00:51:16.289820920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:16.290207 containerd[1589]: time="2025-10-31T00:51:16.289882470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:16.290207 containerd[1589]: time="2025-10-31T00:51:16.289896778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:16.290207 containerd[1589]: time="2025-10-31T00:51:16.289991555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:16.300849 containerd[1589]: time="2025-10-31T00:51:16.300501657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:16.300849 containerd[1589]: time="2025-10-31T00:51:16.300560463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:16.300849 containerd[1589]: time="2025-10-31T00:51:16.300582696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:16.300849 containerd[1589]: time="2025-10-31T00:51:16.300727670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:16.333474 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:51:16.340852 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:51:16.360252 containerd[1589]: time="2025-10-31T00:51:16.360213196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hhrcd,Uid:71f7e319-4b6c-4422-b260-0754c2c2c444,Namespace:kube-system,Attempt:0,} returns sandbox id \"4190a6480c2836e6449659a78ae69977fd766e03fb60dc0ee7986168f2f3bc00\"" Oct 31 00:51:16.361025 kubelet[2673]: E1031 00:51:16.360911 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:16.363400 containerd[1589]: time="2025-10-31T00:51:16.363346927Z" level=info msg="CreateContainer within sandbox \"4190a6480c2836e6449659a78ae69977fd766e03fb60dc0ee7986168f2f3bc00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:51:16.372299 containerd[1589]: time="2025-10-31T00:51:16.372255726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvw7k,Uid:761399ed-44ac-4ca9-bc3c-b54f23c0bcc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb3cf8e2c96d29570be2d6aac16a56452212b6db9327b49cd75eb37684829e8c\"" Oct 31 00:51:16.373298 kubelet[2673]: E1031 00:51:16.373269 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:16.374987 containerd[1589]: time="2025-10-31T00:51:16.374916379Z" level=info msg="CreateContainer within sandbox \"eb3cf8e2c96d29570be2d6aac16a56452212b6db9327b49cd75eb37684829e8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:51:16.563886 containerd[1589]: time="2025-10-31T00:51:16.563838400Z" level=info msg="CreateContainer within sandbox \"eb3cf8e2c96d29570be2d6aac16a56452212b6db9327b49cd75eb37684829e8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2298c6b61a5b425d5be9a770e7ca1dd8d2dba2fb202ca4f2d7f0fc5be12b328\"" Oct 31 00:51:16.565304 containerd[1589]: time="2025-10-31T00:51:16.564433539Z" level=info msg="StartContainer for \"d2298c6b61a5b425d5be9a770e7ca1dd8d2dba2fb202ca4f2d7f0fc5be12b328\"" Oct 31 00:51:16.569310 containerd[1589]: time="2025-10-31T00:51:16.569273851Z" level=info msg="CreateContainer within sandbox \"4190a6480c2836e6449659a78ae69977fd766e03fb60dc0ee7986168f2f3bc00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12769d49e0a19e147b20acf53891c5a766e777543564ecc40984979cb0cf58f9\"" Oct 31 00:51:16.570393 containerd[1589]: time="2025-10-31T00:51:16.569690429Z" level=info msg="StartContainer for \"12769d49e0a19e147b20acf53891c5a766e777543564ecc40984979cb0cf58f9\"" Oct 31 00:51:16.624079 containerd[1589]: time="2025-10-31T00:51:16.623958582Z" level=info msg="StartContainer for \"d2298c6b61a5b425d5be9a770e7ca1dd8d2dba2fb202ca4f2d7f0fc5be12b328\" returns successfully" Oct 31 00:51:16.625601 containerd[1589]: time="2025-10-31T00:51:16.625513163Z" level=info msg="StartContainer for \"12769d49e0a19e147b20acf53891c5a766e777543564ecc40984979cb0cf58f9\" returns successfully" Oct 31 00:51:17.054557 kubelet[2673]: E1031 00:51:17.054415 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:17.056088 kubelet[2673]: E1031 00:51:17.055995 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:17.140912 kubelet[2673]: I1031 00:51:17.140851 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hhrcd" podStartSLOduration=23.140822086 podStartE2EDuration="23.140822086s" podCreationTimestamp="2025-10-31 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:17.140123367 +0000 UTC m=+29.587466644" watchObservedRunningTime="2025-10-31 00:51:17.140822086 +0000 UTC m=+29.588165363" Oct 31 00:51:17.297864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78311999.mount: Deactivated successfully. Oct 31 00:51:17.709400 kubelet[2673]: I1031 00:51:17.709319 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bvw7k" podStartSLOduration=23.709297656 podStartE2EDuration="23.709297656s" podCreationTimestamp="2025-10-31 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:17.576290966 +0000 UTC m=+30.023634243" watchObservedRunningTime="2025-10-31 00:51:17.709297656 +0000 UTC m=+30.156640933" Oct 31 00:51:18.057673 kubelet[2673]: E1031 00:51:18.057555 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:18.057673 kubelet[2673]: E1031 00:51:18.057597 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:18.570590 systemd[1]: Started sshd@8-10.0.0.160:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996). Oct 31 00:51:18.600314 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:18.601867 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:18.605924 systemd-logind[1562]: New session 9 of user core. Oct 31 00:51:18.616685 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 00:51:18.825573 sshd[4078]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:18.829695 systemd[1]: sshd@8-10.0.0.160:22-10.0.0.1:57996.service: Deactivated successfully. Oct 31 00:51:18.832069 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:51:18.832099 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:51:18.833146 systemd-logind[1562]: Removed session 9. 
Oct 31 00:51:19.059260 kubelet[2673]: E1031 00:51:19.059213 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:19.059260 kubelet[2673]: E1031 00:51:19.059218 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:51:23.840594 systemd[1]: Started sshd@9-10.0.0.160:22-10.0.0.1:52640.service - OpenSSH per-connection server daemon (10.0.0.1:52640). Oct 31 00:51:23.868435 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 52640 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:23.870134 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:23.874486 systemd-logind[1562]: New session 10 of user core. Oct 31 00:51:23.882620 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:51:24.004531 sshd[4098]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:24.008225 systemd[1]: sshd@9-10.0.0.160:22-10.0.0.1:52640.service: Deactivated successfully. Oct 31 00:51:24.010620 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:51:24.010694 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:51:24.011689 systemd-logind[1562]: Removed session 10. Oct 31 00:51:29.027618 systemd[1]: Started sshd@10-10.0.0.160:22-10.0.0.1:52652.service - OpenSSH per-connection server daemon (10.0.0.1:52652). Oct 31 00:51:29.057664 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 52652 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:29.059059 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:29.062878 systemd-logind[1562]: New session 11 of user core. Oct 31 00:51:29.081699 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 00:51:29.213426 sshd[4117]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:29.228637 systemd[1]: Started sshd@11-10.0.0.160:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658). Oct 31 00:51:29.229290 systemd[1]: sshd@10-10.0.0.160:22-10.0.0.1:52652.service: Deactivated successfully. Oct 31 00:51:29.231345 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:51:29.232912 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:51:29.233983 systemd-logind[1562]: Removed session 11. Oct 31 00:51:29.256401 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:29.257795 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:29.261912 systemd-logind[1562]: New session 12 of user core. Oct 31 00:51:29.277610 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:51:29.429171 sshd[4131]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:29.437849 systemd[1]: Started sshd@12-10.0.0.160:22-10.0.0.1:52666.service - OpenSSH per-connection server daemon (10.0.0.1:52666). Oct 31 00:51:29.438572 systemd[1]: sshd@11-10.0.0.160:22-10.0.0.1:52658.service: Deactivated successfully. Oct 31 00:51:29.444600 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 00:51:29.447798 systemd-logind[1562]: Session 12 logged out. 
Waiting for processes to exit. Oct 31 00:51:29.449116 systemd-logind[1562]: Removed session 12. Oct 31 00:51:29.477509 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 52666 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:29.479238 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:29.483280 systemd-logind[1562]: New session 13 of user core. Oct 31 00:51:29.492640 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:51:29.600075 sshd[4143]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:29.604526 systemd[1]: sshd@12-10.0.0.160:22-10.0.0.1:52666.service: Deactivated successfully. Oct 31 00:51:29.607323 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:51:29.607347 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:51:29.608500 systemd-logind[1562]: Removed session 13. Oct 31 00:51:34.616642 systemd[1]: Started sshd@13-10.0.0.160:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). Oct 31 00:51:34.643629 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:34.645115 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:34.649150 systemd-logind[1562]: New session 14 of user core. Oct 31 00:51:34.658629 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 00:51:34.770625 sshd[4161]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:34.774820 systemd[1]: sshd@13-10.0.0.160:22-10.0.0.1:60732.service: Deactivated successfully. Oct 31 00:51:34.777266 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Oct 31 00:51:34.777338 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:51:34.778723 systemd-logind[1562]: Removed session 14. Oct 31 00:51:39.782599 systemd[1]: Started sshd@14-10.0.0.160:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Oct 31 00:51:39.809037 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:39.810546 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:39.814239 systemd-logind[1562]: New session 15 of user core. Oct 31 00:51:39.825637 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 00:51:39.930769 sshd[4176]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:39.944599 systemd[1]: Started sshd@15-10.0.0.160:22-10.0.0.1:60768.service - OpenSSH per-connection server daemon (10.0.0.1:60768). Oct 31 00:51:39.945110 systemd[1]: sshd@14-10.0.0.160:22-10.0.0.1:60762.service: Deactivated successfully. Oct 31 00:51:39.946979 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:51:39.948687 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:51:39.949753 systemd-logind[1562]: Removed session 15. Oct 31 00:51:39.971659 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 60768 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:39.973128 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:39.976981 systemd-logind[1562]: New session 16 of user core. Oct 31 00:51:39.986628 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 31 00:51:40.257028 sshd[4188]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:40.264588 systemd[1]: Started sshd@16-10.0.0.160:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852). Oct 31 00:51:40.265056 systemd[1]: sshd@15-10.0.0.160:22-10.0.0.1:60768.service: Deactivated successfully. Oct 31 00:51:40.269586 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:51:40.269813 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:51:40.270947 systemd-logind[1562]: Removed session 16. Oct 31 00:51:40.298467 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:40.300112 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:40.304221 systemd-logind[1562]: New session 17 of user core. Oct 31 00:51:40.312617 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 00:51:40.750627 sshd[4202]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:40.759978 systemd[1]: Started sshd@17-10.0.0.160:22-10.0.0.1:50866.service - OpenSSH per-connection server daemon (10.0.0.1:50866). Oct 31 00:51:40.760873 systemd[1]: sshd@16-10.0.0.160:22-10.0.0.1:50852.service: Deactivated successfully. Oct 31 00:51:40.767320 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:51:40.770524 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:51:40.773004 systemd-logind[1562]: Removed session 17. Oct 31 00:51:40.791195 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 50866 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:40.792723 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:40.796825 systemd-logind[1562]: New session 18 of user core. Oct 31 00:51:40.808641 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 00:51:41.064138 sshd[4221]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:41.076621 systemd[1]: Started sshd@18-10.0.0.160:22-10.0.0.1:50874.service - OpenSSH per-connection server daemon (10.0.0.1:50874). Oct 31 00:51:41.077112 systemd[1]: sshd@17-10.0.0.160:22-10.0.0.1:50866.service: Deactivated successfully. Oct 31 00:51:41.080618 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:51:41.081403 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:51:41.082662 systemd-logind[1562]: Removed session 18. Oct 31 00:51:41.106343 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 50874 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:41.107935 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:41.111797 systemd-logind[1562]: New session 19 of user core. Oct 31 00:51:41.118743 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 00:51:41.227487 sshd[4236]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:41.231405 systemd[1]: sshd@18-10.0.0.160:22-10.0.0.1:50874.service: Deactivated successfully. Oct 31 00:51:41.236721 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:51:41.237613 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Oct 31 00:51:41.238618 systemd-logind[1562]: Removed session 19. 
Oct 31 00:51:46.240600 systemd[1]: Started sshd@19-10.0.0.160:22-10.0.0.1:50946.service - OpenSSH per-connection server daemon (10.0.0.1:50946). Oct 31 00:51:46.268487 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 50946 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:46.270146 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:46.273800 systemd-logind[1562]: New session 20 of user core. Oct 31 00:51:46.283632 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 00:51:46.388225 sshd[4254]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:46.392558 systemd[1]: sshd@19-10.0.0.160:22-10.0.0.1:50946.service: Deactivated successfully. Oct 31 00:51:46.394926 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:51:46.395159 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:51:46.396004 systemd-logind[1562]: Removed session 20. Oct 31 00:51:51.408722 systemd[1]: Started sshd@20-10.0.0.160:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426). Oct 31 00:51:51.435572 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:51.437066 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:51.440978 systemd-logind[1562]: New session 21 of user core. Oct 31 00:51:51.450605 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 00:51:51.557681 sshd[4273]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:51.561872 systemd[1]: sshd@20-10.0.0.160:22-10.0.0.1:59426.service: Deactivated successfully. Oct 31 00:51:51.564431 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Oct 31 00:51:51.564460 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:51:51.565680 systemd-logind[1562]: Removed session 21. Oct 31 00:51:56.565583 systemd[1]: Started sshd@21-10.0.0.160:22-10.0.0.1:59458.service - OpenSSH per-connection server daemon (10.0.0.1:59458). Oct 31 00:51:56.592574 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 59458 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:51:56.593989 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:51:56.597744 systemd-logind[1562]: New session 22 of user core. Oct 31 00:51:56.607616 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 00:51:56.709185 sshd[4290]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:56.713060 systemd[1]: sshd@21-10.0.0.160:22-10.0.0.1:59458.service: Deactivated successfully. Oct 31 00:51:56.715313 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:51:56.715346 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 00:51:56.716548 systemd-logind[1562]: Removed session 22. Oct 31 00:51:58.651582 kubelet[2673]: E1031 00:51:58.651546 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:01.718627 systemd[1]: Started sshd@22-10.0.0.160:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028). 
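The kubelet dns.go:153 warning that starts appearing here means the resolver configuration kubelet was given lists more nameservers than it will propagate into pods: kubelet caps the list at three, drops the rest, and applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8. The warning is harmless but will keep repeating as long as the extra entries remain in the node's resolv.conf (or in whatever file is passed via --resolv-conf). A quick, hypothetical check of such a file:

# Count nameserver entries in a resolv.conf-style file; kubelet keeps at most 3.
from pathlib import Path

def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    entries = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if parts and parts[0] == "nameserver":
            entries.append(parts[1])
    return entries

if __name__ == "__main__":
    ns = nameservers()
    print(f"{len(ns)} nameservers: {ns}")
    if len(ns) > 3:
        print("kubelet will warn and only apply the first 3:", ns[:3])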
Oct 31 00:52:01.745076 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:52:01.746516 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:01.750102 systemd-logind[1562]: New session 23 of user core. Oct 31 00:52:01.763614 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 00:52:01.865684 sshd[4305]: pam_unix(sshd:session): session closed for user core Oct 31 00:52:01.879588 systemd[1]: Started sshd@23-10.0.0.160:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040). Oct 31 00:52:01.880102 systemd[1]: sshd@22-10.0.0.160:22-10.0.0.1:56028.service: Deactivated successfully. Oct 31 00:52:01.881977 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 00:52:01.883727 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Oct 31 00:52:01.884766 systemd-logind[1562]: Removed session 23. Oct 31 00:52:01.905890 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:52:01.907321 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:01.911088 systemd-logind[1562]: New session 24 of user core. Oct 31 00:52:01.918629 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 00:52:03.237168 containerd[1589]: time="2025-10-31T00:52:03.237133362Z" level=info msg="StopContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" with timeout 30 (s)" Oct 31 00:52:03.238404 containerd[1589]: time="2025-10-31T00:52:03.238049296Z" level=info msg="Stop container \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" with signal terminated" Oct 31 00:52:03.277238 containerd[1589]: time="2025-10-31T00:52:03.277198926Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:52:03.285604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec-rootfs.mount: Deactivated successfully. 
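Two things happen in the block above once session 24 is established: containerd is asked to stop container 681cff9c... (the cilium-operator container, judging by the volumes unmounted later) with a 30-second grace period, and, because tearing down the Cilium pods removes /etc/cni/net.d/05-cilium.conf, the CRI plugin's config reload finds no CNI network config at all. From this point the runtime treats the CNI plugin as not initialized, which is what later drives the NetworkReady=false condition further down in the log. A trivial, hypothetical check of what the runtime would currently see in the CNI config directory:

# List CNI network configs the container runtime would load; an empty directory
# matches "no network config found in /etc/cni/net.d" and NetworkReady=false.
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")

def cni_configs() -> list[str]:
    if not CNI_DIR.is_dir():
        return []
    return sorted(p.name for p in CNI_DIR.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    configs = cni_configs()
    print(configs if configs else "no CNI config present - expect NetworkReady=false")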
Oct 31 00:52:03.287701 containerd[1589]: time="2025-10-31T00:52:03.287675082Z" level=info msg="StopContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" with timeout 2 (s)" Oct 31 00:52:03.287903 containerd[1589]: time="2025-10-31T00:52:03.287885681Z" level=info msg="Stop container \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" with signal terminated" Oct 31 00:52:03.293548 systemd-networkd[1263]: lxc_health: Link DOWN Oct 31 00:52:03.293555 systemd-networkd[1263]: lxc_health: Lost carrier Oct 31 00:52:03.299356 containerd[1589]: time="2025-10-31T00:52:03.299307528Z" level=info msg="shim disconnected" id=681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec namespace=k8s.io Oct 31 00:52:03.299685 containerd[1589]: time="2025-10-31T00:52:03.299526933Z" level=warning msg="cleaning up after shim disconnected" id=681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec namespace=k8s.io Oct 31 00:52:03.299685 containerd[1589]: time="2025-10-31T00:52:03.299542534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:03.316523 containerd[1589]: time="2025-10-31T00:52:03.316482649Z" level=info msg="StopContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" returns successfully" Oct 31 00:52:03.321524 containerd[1589]: time="2025-10-31T00:52:03.321484801Z" level=info msg="StopPodSandbox for \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\"" Oct 31 00:52:03.321650 containerd[1589]: time="2025-10-31T00:52:03.321528343Z" level=info msg="Container to stop \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.326174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b-shm.mount: Deactivated successfully. Oct 31 00:52:03.342843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39-rootfs.mount: Deactivated successfully. Oct 31 00:52:03.345180 containerd[1589]: time="2025-10-31T00:52:03.345131145Z" level=info msg="shim disconnected" id=92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39 namespace=k8s.io Oct 31 00:52:03.345278 containerd[1589]: time="2025-10-31T00:52:03.345179537Z" level=warning msg="cleaning up after shim disconnected" id=92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39 namespace=k8s.io Oct 31 00:52:03.345278 containerd[1589]: time="2025-10-31T00:52:03.345189165Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:03.355882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b-rootfs.mount: Deactivated successfully. 
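Here the graceful-stop pattern plays out twice: containerd signals the task (the stop signal first, with a forced kill only if the stated timeout expires), the shim for the exited task disconnects, the per-container rootfs mount is unmounted, and StopContainer reports success; the cilium-agent container 92e21f21... also drops its lxc_health link as it goes down. To see how long each stop actually took relative to its timeout, the request and result entries can be paired up. A minimal, hypothetical sketch that parses the containerd time=/msg= fields as they appear in this dump:

# Measure how long containerd took to stop each container by pairing the
# "StopContainer ... with timeout" request with its "returns successfully" result.
import re
import sys
from datetime import datetime

STOP = re.compile(
    r'time="(?P<ts>[^"]+)" level=\w+ msg="StopContainer for \\?"(?P<id>[0-9a-f]{12,})\\?" '
    r'(?:with timeout (?P<timeout>\d+)|(?P<done>returns successfully))')

def parse_iso(ts: str) -> datetime:
    # containerd prints RFC3339 with nanoseconds; trim to microseconds for datetime.
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

def report(text: str) -> None:
    pending = {}
    for m in STOP.finditer(text):
        cid, when = m["id"], parse_iso(m["ts"])
        if m["timeout"] is not None:
            pending[cid] = (int(m["timeout"]), when)
        elif m["done"] and cid in pending:
            timeout, started = pending.pop(cid)
            print(f"{cid[:12]}: stopped in {(when - started).total_seconds():.2f}s "
                  f"(timeout {timeout}s)")
    for cid, (timeout, started) in pending.items():
        print(f"{cid[:12]}: stop requested at {started.time()}, no result seen")

if __name__ == "__main__":
    report(sys.stdin.read())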
Oct 31 00:52:03.360181 containerd[1589]: time="2025-10-31T00:52:03.360122180Z" level=info msg="shim disconnected" id=5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b namespace=k8s.io Oct 31 00:52:03.360181 containerd[1589]: time="2025-10-31T00:52:03.360171273Z" level=warning msg="cleaning up after shim disconnected" id=5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b namespace=k8s.io Oct 31 00:52:03.360181 containerd[1589]: time="2025-10-31T00:52:03.360180552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:03.363639 containerd[1589]: time="2025-10-31T00:52:03.363610616Z" level=info msg="StopContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" returns successfully" Oct 31 00:52:03.364074 containerd[1589]: time="2025-10-31T00:52:03.364043987Z" level=info msg="StopPodSandbox for \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\"" Oct 31 00:52:03.364111 containerd[1589]: time="2025-10-31T00:52:03.364082469Z" level=info msg="Container to stop \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.364111 containerd[1589]: time="2025-10-31T00:52:03.364094452Z" level=info msg="Container to stop \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.364111 containerd[1589]: time="2025-10-31T00:52:03.364104040Z" level=info msg="Container to stop \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.364175 containerd[1589]: time="2025-10-31T00:52:03.364113709Z" level=info msg="Container to stop \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.364175 containerd[1589]: time="2025-10-31T00:52:03.364123337Z" level=info msg="Container to stop \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:52:03.375917 containerd[1589]: time="2025-10-31T00:52:03.375875510Z" level=info msg="TearDown network for sandbox \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\" successfully" Oct 31 00:52:03.375917 containerd[1589]: time="2025-10-31T00:52:03.375901909Z" level=info msg="StopPodSandbox for \"5d79a01f7ce0bf8fc956cb6070efbad93608044b8a8646fe2cb21d53e02d622b\" returns successfully" Oct 31 00:52:03.397811 kubelet[2673]: I1031 00:52:03.397779 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcgsc\" (UniqueName: \"kubernetes.io/projected/349fc37b-e73e-4ddb-b11e-f39a21204d00-kube-api-access-vcgsc\") pod \"349fc37b-e73e-4ddb-b11e-f39a21204d00\" (UID: \"349fc37b-e73e-4ddb-b11e-f39a21204d00\") " Oct 31 00:52:03.397811 kubelet[2673]: I1031 00:52:03.397825 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349fc37b-e73e-4ddb-b11e-f39a21204d00-cilium-config-path\") pod \"349fc37b-e73e-4ddb-b11e-f39a21204d00\" (UID: \"349fc37b-e73e-4ddb-b11e-f39a21204d00\") " Oct 31 00:52:03.401763 kubelet[2673]: I1031 00:52:03.401542 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/349fc37b-e73e-4ddb-b11e-f39a21204d00-kube-api-access-vcgsc" (OuterVolumeSpecName: "kube-api-access-vcgsc") pod "349fc37b-e73e-4ddb-b11e-f39a21204d00" (UID: "349fc37b-e73e-4ddb-b11e-f39a21204d00"). InnerVolumeSpecName "kube-api-access-vcgsc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:52:03.401849 kubelet[2673]: I1031 00:52:03.401810 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349fc37b-e73e-4ddb-b11e-f39a21204d00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "349fc37b-e73e-4ddb-b11e-f39a21204d00" (UID: "349fc37b-e73e-4ddb-b11e-f39a21204d00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:52:03.402582 containerd[1589]: time="2025-10-31T00:52:03.402534428Z" level=info msg="shim disconnected" id=0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546 namespace=k8s.io Oct 31 00:52:03.402672 containerd[1589]: time="2025-10-31T00:52:03.402585715Z" level=warning msg="cleaning up after shim disconnected" id=0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546 namespace=k8s.io Oct 31 00:52:03.402672 containerd[1589]: time="2025-10-31T00:52:03.402606415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:03.416906 containerd[1589]: time="2025-10-31T00:52:03.416853651Z" level=info msg="TearDown network for sandbox \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" successfully" Oct 31 00:52:03.416906 containerd[1589]: time="2025-10-31T00:52:03.416893356Z" level=info msg="StopPodSandbox for \"0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546\" returns successfully" Oct 31 00:52:03.498508 kubelet[2673]: I1031 00:52:03.498404 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.498508 kubelet[2673]: I1031 00:52:03.498427 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-etc-cni-netd\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499404 kubelet[2673]: I1031 00:52:03.499332 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8b9\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-kube-api-access-vq8b9\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499404 kubelet[2673]: I1031 00:52:03.499395 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-hubble-tls\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499404 kubelet[2673]: I1031 00:52:03.499413 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-cgroup\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499404 kubelet[2673]: I1031 00:52:03.499430 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7740736a-761c-4d0a-8de4-75a9b1d133a8-clustermesh-secrets\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499446 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-lib-modules\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499460 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-xtables-lock\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499477 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-config-path\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499490 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-hostproc\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499503 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-net\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: 
\"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499664 kubelet[2673]: I1031 00:52:03.499517 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-run\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499531 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-bpf-maps\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499544 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cni-path\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499560 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-kernel\") pod \"7740736a-761c-4d0a-8de4-75a9b1d133a8\" (UID: \"7740736a-761c-4d0a-8de4-75a9b1d133a8\") " Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499610 2673 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499621 2673 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vcgsc\" (UniqueName: \"kubernetes.io/projected/349fc37b-e73e-4ddb-b11e-f39a21204d00-kube-api-access-vcgsc\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.499804 kubelet[2673]: I1031 00:52:03.499633 2673 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349fc37b-e73e-4ddb-b11e-f39a21204d00-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.499952 kubelet[2673]: I1031 00:52:03.499664 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503266 kubelet[2673]: I1031 00:52:03.501167 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503266 kubelet[2673]: I1031 00:52:03.501196 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503266 kubelet[2673]: I1031 00:52:03.501212 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503266 kubelet[2673]: I1031 00:52:03.501228 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503266 kubelet[2673]: I1031 00:52:03.501241 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503475 kubelet[2673]: I1031 00:52:03.501256 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503475 kubelet[2673]: I1031 00:52:03.501271 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503475 kubelet[2673]: I1031 00:52:03.501285 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:52:03.503475 kubelet[2673]: I1031 00:52:03.501870 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-kube-api-access-vq8b9" (OuterVolumeSpecName: "kube-api-access-vq8b9") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "kube-api-access-vq8b9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:52:03.503679 kubelet[2673]: I1031 00:52:03.503584 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:52:03.504279 kubelet[2673]: I1031 00:52:03.504194 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7740736a-761c-4d0a-8de4-75a9b1d133a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:52:03.504279 kubelet[2673]: I1031 00:52:03.504245 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7740736a-761c-4d0a-8de4-75a9b1d133a8" (UID: "7740736a-761c-4d0a-8de4-75a9b1d133a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:52:03.600565 kubelet[2673]: I1031 00:52:03.600534 2673 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vq8b9\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-kube-api-access-vq8b9\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600565 kubelet[2673]: I1031 00:52:03.600558 2673 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7740736a-761c-4d0a-8de4-75a9b1d133a8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600575 2673 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600591 2673 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7740736a-761c-4d0a-8de4-75a9b1d133a8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600599 2673 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600607 2673 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600616 2673 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600623 2673 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600632 2673 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600653 kubelet[2673]: I1031 00:52:03.600641 2673 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600873 kubelet[2673]: I1031 00:52:03.600648 2673 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600873 kubelet[2673]: I1031 00:52:03.600656 2673 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:03.600873 kubelet[2673]: I1031 00:52:03.600664 2673 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7740736a-761c-4d0a-8de4-75a9b1d133a8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:04.175403 kubelet[2673]: I1031 00:52:04.175357 2673 scope.go:117] "RemoveContainer" containerID="92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39" Oct 31 00:52:04.176810 containerd[1589]: time="2025-10-31T00:52:04.176771874Z" level=info msg="RemoveContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\"" Oct 31 00:52:04.183300 containerd[1589]: time="2025-10-31T00:52:04.183262628Z" level=info msg="RemoveContainer for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" returns successfully" Oct 31 00:52:04.183529 kubelet[2673]: I1031 00:52:04.183491 2673 scope.go:117] "RemoveContainer" containerID="480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916" Oct 31 00:52:04.184350 containerd[1589]: time="2025-10-31T00:52:04.184315141Z" level=info msg="RemoveContainer for \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\"" Oct 31 00:52:04.188326 containerd[1589]: time="2025-10-31T00:52:04.188289920Z" level=info msg="RemoveContainer for \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\" returns successfully" Oct 31 00:52:04.189544 kubelet[2673]: I1031 00:52:04.189506 2673 scope.go:117] "RemoveContainer" containerID="b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18" Oct 31 00:52:04.190522 containerd[1589]: time="2025-10-31T00:52:04.190489015Z" level=info msg="RemoveContainer for \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\"" Oct 31 00:52:04.196272 containerd[1589]: time="2025-10-31T00:52:04.196240678Z" level=info msg="RemoveContainer for \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\" returns successfully" Oct 31 00:52:04.196524 kubelet[2673]: I1031 00:52:04.196448 2673 scope.go:117] "RemoveContainer" containerID="2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3" Oct 31 00:52:04.198197 containerd[1589]: time="2025-10-31T00:52:04.198165434Z" level=info msg="RemoveContainer for \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\"" Oct 31 00:52:04.202253 containerd[1589]: time="2025-10-31T00:52:04.202215755Z" level=info msg="RemoveContainer for \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\" returns successfully" Oct 31 00:52:04.202479 kubelet[2673]: I1031 00:52:04.202451 2673 scope.go:117] "RemoveContainer" containerID="7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3" Oct 31 00:52:04.208554 containerd[1589]: time="2025-10-31T00:52:04.208520797Z" level=info msg="RemoveContainer for \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\"" Oct 31 
00:52:04.224480 containerd[1589]: time="2025-10-31T00:52:04.224333898Z" level=info msg="RemoveContainer for \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\" returns successfully" Oct 31 00:52:04.224686 kubelet[2673]: I1031 00:52:04.224601 2673 scope.go:117] "RemoveContainer" containerID="92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39" Oct 31 00:52:04.224864 containerd[1589]: time="2025-10-31T00:52:04.224826982Z" level=error msg="ContainerStatus for \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\": not found" Oct 31 00:52:04.233210 kubelet[2673]: E1031 00:52:04.233182 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\": not found" containerID="92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39" Oct 31 00:52:04.233282 kubelet[2673]: I1031 00:52:04.233208 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39"} err="failed to get container status \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\": rpc error: code = NotFound desc = an error occurred when try to find container \"92e21f21672f1ec81fb7e13dd5ee977531eff3c9433243b09cb3534bff9e4c39\": not found" Oct 31 00:52:04.233282 kubelet[2673]: I1031 00:52:04.233279 2673 scope.go:117] "RemoveContainer" containerID="480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916" Oct 31 00:52:04.233513 containerd[1589]: time="2025-10-31T00:52:04.233466014Z" level=error msg="ContainerStatus for \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\": not found" Oct 31 00:52:04.233664 kubelet[2673]: E1031 00:52:04.233575 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\": not found" containerID="480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916" Oct 31 00:52:04.233664 kubelet[2673]: I1031 00:52:04.233602 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916"} err="failed to get container status \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\": rpc error: code = NotFound desc = an error occurred when try to find container \"480b3249fca132ffcc6f2c3697bb6560878ed96a2dcbf95a45061430c1e04916\": not found" Oct 31 00:52:04.233664 kubelet[2673]: I1031 00:52:04.233617 2673 scope.go:117] "RemoveContainer" containerID="b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18" Oct 31 00:52:04.233801 containerd[1589]: time="2025-10-31T00:52:04.233769889Z" level=error msg="ContainerStatus for \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\": not found" Oct 31 
00:52:04.233873 kubelet[2673]: E1031 00:52:04.233853 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\": not found" containerID="b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18" Oct 31 00:52:04.233903 kubelet[2673]: I1031 00:52:04.233870 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18"} err="failed to get container status \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9a48ac9d307870f5ba5f4bdf18318490b90bfa67d7fbc79b5a5a8b937051a18\": not found" Oct 31 00:52:04.233903 kubelet[2673]: I1031 00:52:04.233882 2673 scope.go:117] "RemoveContainer" containerID="2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3" Oct 31 00:52:04.234047 containerd[1589]: time="2025-10-31T00:52:04.234011638Z" level=error msg="ContainerStatus for \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\": not found" Oct 31 00:52:04.234195 kubelet[2673]: E1031 00:52:04.234165 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\": not found" containerID="2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3" Oct 31 00:52:04.234232 kubelet[2673]: I1031 00:52:04.234199 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3"} err="failed to get container status \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ca6dded5bbcd9b19589d4cd22c00f89d0b8e1b8ce5b12488dc4f0c939ab8bc3\": not found" Oct 31 00:52:04.234232 kubelet[2673]: I1031 00:52:04.234225 2673 scope.go:117] "RemoveContainer" containerID="7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3" Oct 31 00:52:04.234428 containerd[1589]: time="2025-10-31T00:52:04.234406014Z" level=error msg="ContainerStatus for \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\": not found" Oct 31 00:52:04.234571 kubelet[2673]: E1031 00:52:04.234546 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\": not found" containerID="7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3" Oct 31 00:52:04.234623 kubelet[2673]: I1031 00:52:04.234565 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3"} err="failed to get container status \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"7b0c14e84b3e2406127095bce7fe6be8f19e089a8e779d79660bdbf4d2d1b1e3\": not found" Oct 31 00:52:04.234623 kubelet[2673]: I1031 00:52:04.234588 2673 scope.go:117] "RemoveContainer" containerID="681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec" Oct 31 00:52:04.235383 containerd[1589]: time="2025-10-31T00:52:04.235351736Z" level=info msg="RemoveContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\"" Oct 31 00:52:04.241210 containerd[1589]: time="2025-10-31T00:52:04.241180516Z" level=info msg="RemoveContainer for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" returns successfully" Oct 31 00:52:04.241598 containerd[1589]: time="2025-10-31T00:52:04.241509318Z" level=error msg="ContainerStatus for \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\": not found" Oct 31 00:52:04.241641 kubelet[2673]: I1031 00:52:04.241323 2673 scope.go:117] "RemoveContainer" containerID="681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec" Oct 31 00:52:04.241641 kubelet[2673]: E1031 00:52:04.241615 2673 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\": not found" containerID="681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec" Oct 31 00:52:04.241641 kubelet[2673]: I1031 00:52:04.241634 2673 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec"} err="failed to get container status \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\": rpc error: code = NotFound desc = an error occurred when try to find container \"681cff9c8d2dd5a5fafd7e2a57752dd8766813a6ce30d164f3d413afc769fcec\": not found" Oct 31 00:52:04.264633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546-rootfs.mount: Deactivated successfully. Oct 31 00:52:04.264828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dff4f416f7d21b3a97a50bcb4e1a8f27fc1fe73ad9252441630aa4c47a4d546-shm.mount: Deactivated successfully. Oct 31 00:52:04.264980 systemd[1]: var-lib-kubelet-pods-349fc37b\x2de73e\x2d4ddb\x2db11e\x2df39a21204d00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvcgsc.mount: Deactivated successfully. Oct 31 00:52:04.265132 systemd[1]: var-lib-kubelet-pods-7740736a\x2d761c\x2d4d0a\x2d8de4\x2d75a9b1d133a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvq8b9.mount: Deactivated successfully. Oct 31 00:52:04.265276 systemd[1]: var-lib-kubelet-pods-7740736a\x2d761c\x2d4d0a\x2d8de4\x2d75a9b1d133a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 31 00:52:04.265433 systemd[1]: var-lib-kubelet-pods-7740736a\x2d761c\x2d4d0a\x2d8de4\x2d75a9b1d133a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 31 00:52:05.207831 sshd[4317]: pam_unix(sshd:session): session closed for user core Oct 31 00:52:05.220580 systemd[1]: Started sshd@24-10.0.0.160:22-10.0.0.1:56044.service - OpenSSH per-connection server daemon (10.0.0.1:56044). 
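The long run above is the kubelet volume reconciler unwinding the two deleted pods: every volume gets an "UnmountVolume started" entry, an "UnmountVolume.TearDown succeeded" entry and a final "Volume detached" report, after which systemd confirms the corresponding .mount units are deactivated. The interleaved "ContainerStatus ... not found" errors are expected noise rather than failures: kubelet re-queries the runtime for containers it has just removed, and containerd correctly answers NotFound. A volume that only ever reaches the "started" stage would point at a stuck unmount; the hypothetical sketch below records the furthest stage seen per volume, tolerating the out-of-order interleaving visible in this log.

# Track each volume of the deleted pods through the three teardown stages:
# "UnmountVolume started" -> "TearDown succeeded" -> "Volume detached".
import re
import sys

PATTERNS = [
    ("started",   re.compile(r'UnmountVolume started for volume \\?"(?P<v>[\w.-]+)\\?"')),
    ("torn down", re.compile(r'UnmountVolume\.TearDown succeeded for volume '
                             r'.*?OuterVolumeSpecName: \\?"(?P<v>[\w.-]+)\\?"', re.DOTALL)),
    ("detached",  re.compile(r'Volume detached for volume \\?"(?P<v>[\w.-]+)\\?"')),
]
RANK = {name: i for i, (name, _) in enumerate(PATTERNS)}

def furthest_stage(text: str) -> dict:
    state = {}
    for name, rx in PATTERNS:
        for m in rx.finditer(text):
            # Keep the most advanced stage seen per volume; log entries interleave.
            if RANK[name] >= RANK.get(state.get(m["v"]), -1):
                state[m["v"]] = name
    return state

if __name__ == "__main__":
    for volume, stage in sorted(furthest_stage(sys.stdin.read()).items()):
        flag = "" if stage == "detached" else "   <-- not fully detached"
        print(f"{volume}: {stage}{flag}")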
Oct 31 00:52:05.221020 systemd[1]: sshd@23-10.0.0.160:22-10.0.0.1:56040.service: Deactivated successfully. Oct 31 00:52:05.224860 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 00:52:05.225706 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Oct 31 00:52:05.226608 systemd-logind[1562]: Removed session 24. Oct 31 00:52:05.250861 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 56044 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:52:05.252421 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:05.256151 systemd-logind[1562]: New session 25 of user core. Oct 31 00:52:05.265616 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 00:52:05.652632 kubelet[2673]: I1031 00:52:05.652492 2673 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349fc37b-e73e-4ddb-b11e-f39a21204d00" path="/var/lib/kubelet/pods/349fc37b-e73e-4ddb-b11e-f39a21204d00/volumes" Oct 31 00:52:05.653189 kubelet[2673]: I1031 00:52:05.653165 2673 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7740736a-761c-4d0a-8de4-75a9b1d133a8" path="/var/lib/kubelet/pods/7740736a-761c-4d0a-8de4-75a9b1d133a8/volumes" Oct 31 00:52:05.874631 sshd[4487]: pam_unix(sshd:session): session closed for user core Oct 31 00:52:05.882504 kubelet[2673]: I1031 00:52:05.882471 2673 memory_manager.go:355] "RemoveStaleState removing state" podUID="349fc37b-e73e-4ddb-b11e-f39a21204d00" containerName="cilium-operator" Oct 31 00:52:05.882504 kubelet[2673]: I1031 00:52:05.882492 2673 memory_manager.go:355] "RemoveStaleState removing state" podUID="7740736a-761c-4d0a-8de4-75a9b1d133a8" containerName="cilium-agent" Oct 31 00:52:05.886713 systemd[1]: Started sshd@25-10.0.0.160:22-10.0.0.1:56056.service - OpenSSH per-connection server daemon (10.0.0.1:56056). Oct 31 00:52:05.887218 systemd[1]: sshd@24-10.0.0.160:22-10.0.0.1:56044.service: Deactivated successfully. Oct 31 00:52:05.903924 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 00:52:05.905594 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Oct 31 00:52:05.907965 systemd-logind[1562]: Removed session 25. 
Oct 31 00:52:05.913292 kubelet[2673]: I1031 00:52:05.913256 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-cilium-cgroup\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913292 kubelet[2673]: I1031 00:52:05.913289 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-lib-modules\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913292 kubelet[2673]: I1031 00:52:05.913307 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d61a471-123c-4b93-99f7-0e4018cc65e7-clustermesh-secrets\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913321 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d61a471-123c-4b93-99f7-0e4018cc65e7-cilium-ipsec-secrets\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913337 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-host-proc-sys-net\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913351 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-xtables-lock\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913380 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-cilium-run\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913393 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-hostproc\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.913866 kubelet[2673]: I1031 00:52:05.913409 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfnw\" (UniqueName: \"kubernetes.io/projected/8d61a471-123c-4b93-99f7-0e4018cc65e7-kube-api-access-cwfnw\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913425 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-etc-cni-netd\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913441 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-bpf-maps\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913455 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-host-proc-sys-kernel\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913470 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d61a471-123c-4b93-99f7-0e4018cc65e7-hubble-tls\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913533 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d61a471-123c-4b93-99f7-0e4018cc65e7-cni-path\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.914083 kubelet[2673]: I1031 00:52:05.913825 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d61a471-123c-4b93-99f7-0e4018cc65e7-cilium-config-path\") pod \"cilium-dmwbs\" (UID: \"8d61a471-123c-4b93-99f7-0e4018cc65e7\") " pod="kube-system/cilium-dmwbs" Oct 31 00:52:05.929644 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 56056 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:52:05.931242 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:05.935415 systemd-logind[1562]: New session 26 of user core. Oct 31 00:52:05.950729 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 31 00:52:06.002520 sshd[4501]: pam_unix(sshd:session): session closed for user core Oct 31 00:52:06.014791 systemd[1]: Started sshd@26-10.0.0.160:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). Oct 31 00:52:06.016038 systemd[1]: sshd@25-10.0.0.160:22-10.0.0.1:56056.service: Deactivated successfully. Oct 31 00:52:06.038037 systemd[1]: session-26.scope: Deactivated successfully. Oct 31 00:52:06.040018 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. Oct 31 00:52:06.041686 systemd-logind[1562]: Removed session 26. Oct 31 00:52:06.049737 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:52:06.051233 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:06.055224 systemd-logind[1562]: New session 27 of user core. Oct 31 00:52:06.062628 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 31 00:52:06.207310 kubelet[2673]: E1031 00:52:06.207282 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:06.208015 containerd[1589]: time="2025-10-31T00:52:06.207667768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmwbs,Uid:8d61a471-123c-4b93-99f7-0e4018cc65e7,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:06.228355 containerd[1589]: time="2025-10-31T00:52:06.228280339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:06.228726 containerd[1589]: time="2025-10-31T00:52:06.228675388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:06.229440 containerd[1589]: time="2025-10-31T00:52:06.228711176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:06.229440 containerd[1589]: time="2025-10-31T00:52:06.229333114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:06.263761 containerd[1589]: time="2025-10-31T00:52:06.263686034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmwbs,Uid:8d61a471-123c-4b93-99f7-0e4018cc65e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\"" Oct 31 00:52:06.264348 kubelet[2673]: E1031 00:52:06.264323 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:06.266802 containerd[1589]: time="2025-10-31T00:52:06.266764137Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:52:06.278582 containerd[1589]: time="2025-10-31T00:52:06.278528270Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"467ec95e1ce7767b30a0991ba51b33712f5c4df016f75b1ddf267ca81fc58c1c\"" Oct 31 00:52:06.279210 containerd[1589]: time="2025-10-31T00:52:06.279178833Z" level=info msg="StartContainer for \"467ec95e1ce7767b30a0991ba51b33712f5c4df016f75b1ddf267ca81fc58c1c\"" Oct 31 00:52:06.335113 containerd[1589]: time="2025-10-31T00:52:06.335075959Z" level=info msg="StartContainer for \"467ec95e1ce7767b30a0991ba51b33712f5c4df016f75b1ddf267ca81fc58c1c\" returns successfully" Oct 31 00:52:06.371394 containerd[1589]: time="2025-10-31T00:52:06.371291809Z" level=info msg="shim disconnected" id=467ec95e1ce7767b30a0991ba51b33712f5c4df016f75b1ddf267ca81fc58c1c namespace=k8s.io Oct 31 00:52:06.371394 containerd[1589]: time="2025-10-31T00:52:06.371346894Z" level=warning msg="cleaning up after shim disconnected" id=467ec95e1ce7767b30a0991ba51b33712f5c4df016f75b1ddf267ca81fc58c1c namespace=k8s.io Oct 31 00:52:06.371394 containerd[1589]: time="2025-10-31T00:52:06.371355269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:07.185344 kubelet[2673]: E1031 00:52:07.185303 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:07.187075 containerd[1589]: time="2025-10-31T00:52:07.187042634Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:52:07.706703 kubelet[2673]: E1031 00:52:07.705961 2673 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:52:07.713916 containerd[1589]: time="2025-10-31T00:52:07.713860427Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771\"" Oct 31 00:52:07.714346 containerd[1589]: time="2025-10-31T00:52:07.714284731Z" level=info msg="StartContainer for \"b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771\"" Oct 31 00:52:07.768309 containerd[1589]: time="2025-10-31T00:52:07.768252577Z" level=info msg="StartContainer for \"b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771\" returns successfully" Oct 31 00:52:07.796772 containerd[1589]: time="2025-10-31T00:52:07.796709958Z" level=info msg="shim disconnected" id=b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771 namespace=k8s.io Oct 31 00:52:07.796772 containerd[1589]: time="2025-10-31T00:52:07.796764221Z" level=warning msg="cleaning up after shim disconnected" id=b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771 namespace=k8s.io Oct 31 00:52:07.796772 containerd[1589]: time="2025-10-31T00:52:07.796773098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:08.023122 systemd[1]: run-containerd-runc-k8s.io-b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771-runc.daOglL.mount: Deactivated successfully. Oct 31 00:52:08.023305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08e61daa21fcd659c969317ab022e3a4a566e3e9625a70c09e0a2670b4cb771-rootfs.mount: Deactivated successfully. 
Oct 31 00:52:08.188411 kubelet[2673]: E1031 00:52:08.188312 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:08.190692 containerd[1589]: time="2025-10-31T00:52:08.190639799Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 31 00:52:08.207276 containerd[1589]: time="2025-10-31T00:52:08.207220415Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac\""
Oct 31 00:52:08.207752 containerd[1589]: time="2025-10-31T00:52:08.207712879Z" level=info msg="StartContainer for \"78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac\""
Oct 31 00:52:08.263950 containerd[1589]: time="2025-10-31T00:52:08.263906079Z" level=info msg="StartContainer for \"78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac\" returns successfully"
Oct 31 00:52:08.289855 containerd[1589]: time="2025-10-31T00:52:08.289712888Z" level=info msg="shim disconnected" id=78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac namespace=k8s.io
Oct 31 00:52:08.289855 containerd[1589]: time="2025-10-31T00:52:08.289761801Z" level=warning msg="cleaning up after shim disconnected" id=78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac namespace=k8s.io
Oct 31 00:52:08.289855 containerd[1589]: time="2025-10-31T00:52:08.289772561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 31 00:52:09.023271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78f08b134a853ef53b697ef92820ad0f45529dbde37535268d2773db6ef292ac-rootfs.mount: Deactivated successfully.
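mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs in the entries above are short-lived init containers of the cilium-dmwbs pod, all created inside the same pod sandbox; each exits as soon as its step finishes, so the "shim disconnected" and "cleaning up dead shim" messages that follow every start are ordinary teardown rather than failures. As a rough sketch (container ids abbreviated, log text inlined rather than read from the journal), the name-to-id mapping can be pulled out of such containerd lines like this:

# Rough sketch: recover init-container names and ids, in start order, from
# containerd "returns container id" lines like the ones above.
import re

log_lines = [
    r'msg="CreateContainer within sandbox \"d793017e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"467ec95e\""',
    r'msg="CreateContainer within sandbox \"d793017e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b08e61da\""',
    r'msg="CreateContainer within sandbox \"d793017e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78f08b13\""',
]

pattern = re.compile(r'ContainerMetadata\{Name:([\w-]+),.*returns container id \\"([0-9a-f]+)\\"')
for line in log_lines:
    name, cid = pattern.search(line).groups()
    print(f"{name} -> {cid}")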
Oct 31 00:52:09.192680 kubelet[2673]: E1031 00:52:09.192651 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:09.194327 containerd[1589]: time="2025-10-31T00:52:09.194240675Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 31 00:52:09.208603 containerd[1589]: time="2025-10-31T00:52:09.208555391Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24\""
Oct 31 00:52:09.209752 containerd[1589]: time="2025-10-31T00:52:09.209279033Z" level=info msg="StartContainer for \"d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24\""
Oct 31 00:52:09.265979 containerd[1589]: time="2025-10-31T00:52:09.265816081Z" level=info msg="StartContainer for \"d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24\" returns successfully"
Oct 31 00:52:09.286251 containerd[1589]: time="2025-10-31T00:52:09.286110879Z" level=info msg="shim disconnected" id=d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24 namespace=k8s.io
Oct 31 00:52:09.286251 containerd[1589]: time="2025-10-31T00:52:09.286171273Z" level=warning msg="cleaning up after shim disconnected" id=d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24 namespace=k8s.io
Oct 31 00:52:09.286251 containerd[1589]: time="2025-10-31T00:52:09.286182754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 31 00:52:09.697478 kubelet[2673]: I1031 00:52:09.697349 2673 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-31T00:52:09Z","lastTransitionTime":"2025-10-31T00:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 31 00:52:10.023422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d68bd1e6c95a1ebd1163169ab56e51204ffb01a4298bd0fc05251ee778a25d24-rootfs.mount: Deactivated successfully.
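The setters.go:602 entry above records the kubelet flipping the node's Ready condition to False because the CNI plugin (Cilium, whose init containers are still running at this point) has not initialized yet; the condition is embedded in the log line as JSON, so it can be inspected directly. A small illustrative snippet, with the kubelet line inlined as a string:

# Illustrative only: parse the Ready condition JSON embedded in the kubelet
# "Node became not ready" line quoted from the journal above.
import json
import re

line = ('setters.go:602] "Node became not ready" node="localhost" '
        'condition={"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-10-31T00:52:09Z",'
        '"lastTransitionTime":"2025-10-31T00:52:09Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}')

condition = json.loads(re.search(r'condition=(\{.*\})', line).group(1))
print(condition["reason"], "->", condition["message"])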
Oct 31 00:52:10.196598 kubelet[2673]: E1031 00:52:10.196571 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:10.198325 containerd[1589]: time="2025-10-31T00:52:10.198275738Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 31 00:52:11.758459 containerd[1589]: time="2025-10-31T00:52:11.758398584Z" level=info msg="CreateContainer within sandbox \"d793017ed9706172afb7b41a92e10a1c8a2b9540f6596bb2990e3a144678c9f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3970850a7de3ac0f3d39abe8939712f3ff6330f1cb455ccebefe2b3cd21ac32\""
Oct 31 00:52:11.758990 containerd[1589]: time="2025-10-31T00:52:11.758948687Z" level=info msg="StartContainer for \"b3970850a7de3ac0f3d39abe8939712f3ff6330f1cb455ccebefe2b3cd21ac32\""
Oct 31 00:52:11.858452 containerd[1589]: time="2025-10-31T00:52:11.858395213Z" level=info msg="StartContainer for \"b3970850a7de3ac0f3d39abe8939712f3ff6330f1cb455ccebefe2b3cd21ac32\" returns successfully"
Oct 31 00:52:12.194392 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 31 00:52:12.203067 kubelet[2673]: E1031 00:52:12.202773 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:12.651440 kubelet[2673]: E1031 00:52:12.651305 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:13.204480 kubelet[2673]: E1031 00:52:13.204453 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:15.223257 systemd-networkd[1263]: lxc_health: Link UP
Oct 31 00:52:15.233491 systemd-networkd[1263]: lxc_health: Gained carrier
Oct 31 00:52:16.209281 kubelet[2673]: E1031 00:52:16.209051 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:16.211551 kubelet[2673]: E1031 00:52:16.211486 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:16.226605 kubelet[2673]: I1031 00:52:16.226340 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dmwbs" podStartSLOduration=11.226324007 podStartE2EDuration="11.226324007s" podCreationTimestamp="2025-10-31 00:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:12.29299864 +0000 UTC m=+84.740341917" watchObservedRunningTime="2025-10-31 00:52:16.226324007 +0000 UTC m=+88.673667285"
Oct 31 00:52:16.602613 systemd-networkd[1263]: lxc_health: Gained IPv6LL
Oct 31 00:52:17.213915 kubelet[2673]: E1031 00:52:17.213878 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:21.651275 kubelet[2673]: E1031 00:52:21.651233 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:22.390912 sshd[4511]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:22.394866 systemd[1]: sshd@26-10.0.0.160:22-10.0.0.1:56066.service: Deactivated successfully.
Oct 31 00:52:22.397206 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit.
Oct 31 00:52:22.397276 systemd[1]: session-27.scope: Deactivated successfully.
Oct 31 00:52:22.398427 systemd-logind[1562]: Removed session 27.
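A few entries back, the kubelet's pod_startup_latency_tracker reported podStartSLOduration=11.226324007 for kube-system/cilium-dmwbs, which matches the gap between the quoted podCreationTimestamp (00:52:05) and watchObservedRunningTime (00:52:16.226324007); the closing sshd and systemd-logind lines simply record SSH session 27 for user core ending cleanly. A quick Python check of that arithmetic, using only the timestamps quoted in the log:

# Quick arithmetic check against the "Observed pod startup duration" entry above:
# podStartSLOduration should equal watchObservedRunningTime - podCreationTimestamp.
from datetime import datetime

def parse(ts):
    # Accepts "YYYY-MM-DD HH:MM:SS[.nanoseconds]"; truncates to microseconds
    # because datetime.strptime's %f field is limited to six digits.
    date_part, clock = ts.split(" ")
    frac = "000000"
    if "." in clock:
        clock, ns = clock.split(".")
        frac = ns[:6].ljust(6, "0")
    return datetime.strptime(f"{date_part} {clock}.{frac}", "%Y-%m-%d %H:%M:%S.%f")

created = parse("2025-10-31 00:52:05")            # podCreationTimestamp
running = parse("2025-10-31 00:52:16.226324007")  # watchObservedRunningTime
print((running - created).total_seconds())        # 11.226324, matching podStartSLOduration (to microsecond precision)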