Sep 5 00:12:52.905349 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025
Sep 5 00:12:52.905377 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:12:52.905399 kernel: BIOS-provided physical RAM map:
Sep 5 00:12:52.905405 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:12:52.905412 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:12:52.905418 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:12:52.905425 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:12:52.905432 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:12:52.905438 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:12:52.905447 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:12:52.905453 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:12:52.905459 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:12:52.905469 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:12:52.905475 kernel: NX (Execute Disable) protection: active
Sep 5 00:12:52.905483 kernel: APIC: Static calls initialized
Sep 5 00:12:52.905496 kernel: SMBIOS 2.8 present.
Sep 5 00:12:52.905503 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:12:52.905510 kernel: Hypervisor detected: KVM
Sep 5 00:12:52.905516 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:12:52.905523 kernel: kvm-clock: using sched offset of 3307985938 cycles
Sep 5 00:12:52.905530 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:12:52.905537 kernel: tsc: Detected 2794.750 MHz processor
Sep 5 00:12:52.905545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:12:52.905552 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:12:52.905559 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:12:52.905569 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:12:52.905576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:12:52.905583 kernel: Using GB pages for direct mapping
Sep 5 00:12:52.905590 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:12:52.905597 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:12:52.905604 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905612 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905619 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905629 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:12:52.905636 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905643 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905650 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905657 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:12:52.905664 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:12:52.905671 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:12:52.905682 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:12:52.905692 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:12:52.905700 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:12:52.905707 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:12:52.905714 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:12:52.905724 kernel: No NUMA configuration found
Sep 5 00:12:52.905731 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:12:52.905739 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 5 00:12:52.905749 kernel: Zone ranges:
Sep 5 00:12:52.905756 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:12:52.905764 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:12:52.905784 kernel: Normal empty
Sep 5 00:12:52.905791 kernel: Movable zone start for each node
Sep 5 00:12:52.905798 kernel: Early memory node ranges
Sep 5 00:12:52.905806 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:12:52.905813 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:12:52.905820 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:12:52.905831 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:12:52.905841 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:12:52.905848 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:12:52.905856 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:12:52.905863 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:12:52.905870 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:12:52.905877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:12:52.905885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:12:52.905892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:12:52.905902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:12:52.905909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:12:52.905916 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:12:52.905923 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:12:52.905931 kernel: TSC deadline timer available
Sep 5 00:12:52.905938 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 5 00:12:52.905945 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:12:52.905952 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:12:52.905962 kernel: kvm-guest: setup PV sched yield
Sep 5 00:12:52.905972 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:12:52.905979 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:12:52.905987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:12:52.905994 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:12:52.906002 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 5 00:12:52.906009 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 5 00:12:52.906016 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:12:52.906023 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:12:52.906030 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:12:52.906042 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:12:52.906049 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:12:52.906057 kernel: random: crng init done
Sep 5 00:12:52.906064 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:12:52.906072 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:12:52.906079 kernel: Fallback order for Node 0: 0
Sep 5 00:12:52.906086 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 5 00:12:52.906094 kernel: Policy zone: DMA32
Sep 5 00:12:52.906104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:12:52.906111 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Sep 5 00:12:52.906119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:12:52.906126 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 5 00:12:52.906133 kernel: ftrace: allocated 149 pages with 4 groups
Sep 5 00:12:52.906141 kernel: Dynamic Preempt: voluntary
Sep 5 00:12:52.906148 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:12:52.906156 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:12:52.906164 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:12:52.906174 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:12:52.906181 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:12:52.906189 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:12:52.906196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:12:52.906206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:12:52.906213 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:12:52.906221 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:12:52.906228 kernel: Console: colour VGA+ 80x25
Sep 5 00:12:52.906235 kernel: printk: console [ttyS0] enabled
Sep 5 00:12:52.906243 kernel: ACPI: Core revision 20230628
Sep 5 00:12:52.906253 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:12:52.906260 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:12:52.906267 kernel: x2apic enabled
Sep 5 00:12:52.906275 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:12:52.906282 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:12:52.906290 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:12:52.906297 kernel: kvm-guest: setup PV IPIs
Sep 5 00:12:52.906317 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:12:52.906325 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 5 00:12:52.906332 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 5 00:12:52.906340 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:12:52.906350 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:12:52.906358 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:12:52.906365 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:12:52.906373 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:12:52.906387 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:12:52.906398 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:12:52.906405 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:12:52.906415 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:12:52.906423 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:12:52.906431 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:12:52.906439 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:12:52.906447 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:12:52.906454 kernel: active return thunk: srso_return_thunk
Sep 5 00:12:52.906465 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:12:52.906473 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:12:52.906480 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:12:52.906488 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:12:52.906495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:12:52.906503 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:12:52.906511 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:12:52.906519 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:12:52.906526 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:12:52.906536 kernel: landlock: Up and running.
Sep 5 00:12:52.906544 kernel: SELinux: Initializing.
Sep 5 00:12:52.906552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:12:52.906559 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:12:52.906567 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:12:52.906575 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:12:52.906582 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:12:52.906590 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:12:52.906600 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:12:52.906611 kernel: ... version: 0
Sep 5 00:12:52.906618 kernel: ... bit width: 48
Sep 5 00:12:52.906626 kernel: ... generic registers: 6
Sep 5 00:12:52.906634 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:12:52.906642 kernel: ... max period: 00007fffffffffff
Sep 5 00:12:52.906649 kernel: ... fixed-purpose events: 0
Sep 5 00:12:52.906657 kernel: ... event mask: 000000000000003f
Sep 5 00:12:52.906664 kernel: signal: max sigframe size: 1776
Sep 5 00:12:52.906672 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:12:52.906682 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:12:52.906690 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:12:52.906697 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:12:52.906705 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:12:52.906712 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:12:52.906720 kernel: smpboot: Max logical packages: 1
Sep 5 00:12:52.906727 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 5 00:12:52.906735 kernel: devtmpfs: initialized
Sep 5 00:12:52.906742 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:12:52.906753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:12:52.906761 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:12:52.906769 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:12:52.906788 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:12:52.906796 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:12:52.906803 kernel: audit: type=2000 audit(1757031171.794:1): state=initialized audit_enabled=0 res=1
Sep 5 00:12:52.906811 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:12:52.906818 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:12:52.906826 kernel: cpuidle: using governor menu
Sep 5 00:12:52.906837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:12:52.906845 kernel: dca service started, version 1.12.1
Sep 5 00:12:52.906852 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 5 00:12:52.906861 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:12:52.906868 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:12:52.906876 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:12:52.906884 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:12:52.906894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:12:52.906902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:12:52.906913 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:12:52.906920 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:12:52.906927 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:12:52.906935 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:12:52.906943 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:12:52.906950 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 5 00:12:52.906958 kernel: ACPI: Interpreter enabled
Sep 5 00:12:52.906965 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:12:52.906973 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:12:52.906985 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:12:52.906993 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:12:52.907001 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:12:52.907009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:12:52.907277 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:12:52.907433 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:12:52.907566 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:12:52.907576 kernel: PCI host bridge to bus 0000:00
Sep 5 00:12:52.907729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:12:52.907869 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:12:52.907990 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:12:52.908107 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:12:52.908228 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:12:52.908344 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:12:52.908475 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:12:52.908714 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 5 00:12:52.908901 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 5 00:12:52.909034 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:12:52.909161 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:12:52.909289 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:12:52.909426 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:12:52.909580 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:12:52.909710 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 5 00:12:52.909856 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:12:52.909986 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:12:52.910132 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 5 00:12:52.910333 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 5 00:12:52.910487 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:12:52.910638 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:12:52.910856 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 5 00:12:52.910991 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 5 00:12:52.911119 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:12:52.911247 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:12:52.911376 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:12:52.911532 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 5 00:12:52.911669 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:12:52.911833 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 5 00:12:52.911964 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 5 00:12:52.912091 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:12:52.912239 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 5 00:12:52.912368 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 5 00:12:52.912403 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:12:52.912421 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:12:52.912444 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:12:52.912461 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:12:52.912469 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:12:52.912477 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:12:52.912488 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:12:52.912496 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:12:52.912504 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:12:52.912515 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:12:52.912523 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:12:52.912531 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:12:52.912539 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:12:52.912547 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:12:52.912554 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:12:52.912567 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:12:52.912575 kernel: iommu: Default domain type: Translated
Sep 5 00:12:52.912583 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:12:52.912594 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:12:52.912602 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:12:52.912610 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:12:52.912618 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:12:52.912755 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:12:52.912900 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:12:52.913028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:12:52.913039 kernel: vgaarb: loaded
Sep 5 00:12:52.913051 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:12:52.913059 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:12:52.913067 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:12:52.913075 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:12:52.913083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:12:52.913091 kernel: pnp: PnP ACPI init
Sep 5 00:12:52.913249 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:12:52.913261 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:12:52.913273 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:12:52.913280 kernel: NET: Registered PF_INET protocol family
Sep 5 00:12:52.913288 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:12:52.913296 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:12:52.913304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:12:52.913312 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:12:52.913320 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:12:52.913327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:12:52.913335 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:12:52.913345 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:12:52.913353 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:12:52.913361 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:12:52.913491 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:12:52.913618 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:12:52.913801 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:12:52.913923 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:12:52.914039 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:12:52.914155 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:12:52.914171 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:12:52.914179 kernel: Initialise system trusted keyrings
Sep 5 00:12:52.914187 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:12:52.914195 kernel: Key type asymmetric registered
Sep 5 00:12:52.914203 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:12:52.914211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 00:12:52.914219 kernel: io scheduler mq-deadline registered
Sep 5 00:12:52.914227 kernel: io scheduler kyber registered
Sep 5 00:12:52.914235 kernel: io scheduler bfq registered
Sep 5 00:12:52.914246 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:12:52.914254 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:12:52.914262 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:12:52.914270 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:12:52.914278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:12:52.914286 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:12:52.914294 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:12:52.914302 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:12:52.914310 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:12:52.914464 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:12:52.914477 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:12:52.914597 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:12:52.914717 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:12:52 UTC (1757031172)
Sep 5 00:12:52.914856 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:12:52.914867 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:12:52.914876 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:12:52.914888 kernel: Segment Routing with IPv6
Sep 5 00:12:52.914896 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:12:52.914903 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:12:52.914911 kernel: Key type dns_resolver registered
Sep 5 00:12:52.914919 kernel: IPI shorthand broadcast: enabled
Sep 5 00:12:52.914927 kernel: sched_clock: Marking stable (922003176, 102698610)->(1042129162, -17427376)
Sep 5 00:12:52.914935 kernel: registered taskstats version 1
Sep 5 00:12:52.914943 kernel: Loading compiled-in X.509 certificates
Sep 5 00:12:52.914952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c'
Sep 5 00:12:52.914960 kernel: Key type .fscrypt registered
Sep 5 00:12:52.914971 kernel: Key type fscrypt-provisioning registered
Sep 5 00:12:52.914979 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:12:52.914987 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:12:52.914994 kernel: ima: No architecture policies found
Sep 5 00:12:52.915002 kernel: clk: Disabling unused clocks
Sep 5 00:12:52.915010 kernel: Freeing unused kernel image (initmem) memory: 42872K
Sep 5 00:12:52.915018 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 00:12:52.915026 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 5 00:12:52.915036 kernel: Run /init as init process
Sep 5 00:12:52.915045 kernel: with arguments:
Sep 5 00:12:52.915052 kernel: /init
Sep 5 00:12:52.915060 kernel: with environment:
Sep 5 00:12:52.915068 kernel: HOME=/
Sep 5 00:12:52.915076 kernel: TERM=linux
Sep 5 00:12:52.915083 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:12:52.915094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:12:52.915107 systemd[1]: Detected virtualization kvm.
Sep 5 00:12:52.915116 systemd[1]: Detected architecture x86-64.
Sep 5 00:12:52.915124 systemd[1]: Running in initrd.
Sep 5 00:12:52.915133 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:12:52.915141 systemd[1]: Hostname set to .
Sep 5 00:12:52.915149 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:12:52.915158 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:12:52.915166 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:12:52.915177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:12:52.915187 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:12:52.915208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:12:52.915220 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:12:52.915229 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:12:52.915242 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:12:52.915250 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:12:52.915259 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:12:52.915268 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:12:52.915276 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:12:52.915285 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:12:52.915294 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:12:52.915303 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:12:52.915314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:12:52.915322 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:12:52.915331 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:12:52.915340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:12:52.915348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:12:52.915357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:12:52.915366 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:12:52.915375 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:12:52.915392 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:12:52.915404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:12:52.915413 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:12:52.915421 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:12:52.915430 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:12:52.915439 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:12:52.915447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:12:52.915456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:12:52.915465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:12:52.915476 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:12:52.915511 systemd-journald[193]: Collecting audit messages is disabled. Sep 5 00:12:52.915535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:12:52.915547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:12:52.915556 systemd-journald[193]: Journal started Sep 5 00:12:52.915577 systemd-journald[193]: Runtime Journal (/run/log/journal/2e0a25e6782e4e6f85d6afe3c5b4fdb6) is 6.0M, max 48.4M, 42.3M free. Sep 5 00:12:52.910840 systemd-modules-load[194]: Inserted module 'overlay' Sep 5 00:12:52.941112 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:12:52.945791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:12:52.946930 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 5 00:12:52.950268 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 5 00:12:52.952222 kernel: Bridge firewalling registered Sep 5 00:12:52.950332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:12:52.950982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:12:52.953043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:12:52.959982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:12:52.964024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:12:52.965427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:12:52.966338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:12:52.992171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:12:52.994168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:12:53.002859 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:12:53.004583 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:12:53.021477 dracut-cmdline[230]: dracut-dracut-053 Sep 5 00:12:53.024461 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:12:53.033067 systemd-resolved[224]: Positive Trust Anchors: Sep 5 00:12:53.033085 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:12:53.033116 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:12:53.035829 systemd-resolved[224]: Defaulting to hostname 'linux'. Sep 5 00:12:53.037041 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:12:53.042678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:12:53.139860 kernel: SCSI subsystem initialized Sep 5 00:12:53.153410 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:12:53.166823 kernel: iscsi: registered transport (tcp) Sep 5 00:12:53.189830 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:12:53.189924 kernel: QLogic iSCSI HBA Driver Sep 5 00:12:53.247172 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:12:53.263069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:12:53.288862 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
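The negative trust anchors systemd-resolved lists above disable DNSSEC validation for any name at or below those domains (private-use and RFC 6303 reverse zones). A minimal sketch of that longest-suffix match, with an abbreviated anchor set and a hypothetical helper name:

```python
# Abbreviated from the negative trust anchor list in the log above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
    "lan", "local", "private", "test",
}

def dnssec_validation_disabled(name: str) -> bool:
    """True if `name` equals or falls under a negative trust anchor,
    i.e. resolved will not require DNSSEC proofs for it."""
    labels = name.rstrip(".").lower().split(".")
    # Check every suffix of the name against the anchor set.
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))
```

So `printer.local` and `3.0.0.10.in-addr.arpa` skip validation, while `example.com` is still validated against the root DS record shown above.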
Sep 5 00:12:53.288945 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:12:53.289894 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:12:53.332815 kernel: raid6: avx2x4 gen() 30407 MB/s Sep 5 00:12:53.349800 kernel: raid6: avx2x2 gen() 30648 MB/s Sep 5 00:12:53.366850 kernel: raid6: avx2x1 gen() 23729 MB/s Sep 5 00:12:53.366908 kernel: raid6: using algorithm avx2x2 gen() 30648 MB/s Sep 5 00:12:53.384998 kernel: raid6: .... xor() 18777 MB/s, rmw enabled Sep 5 00:12:53.385100 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:12:53.406822 kernel: xor: automatically using best checksumming function avx Sep 5 00:12:53.567836 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:12:53.582495 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:12:53.593972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:12:53.608242 systemd-udevd[413]: Using default interface naming scheme 'v255'. Sep 5 00:12:53.613229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:12:53.619925 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:12:53.636223 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Sep 5 00:12:53.670906 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:12:53.685110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:12:53.752692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:12:53.763113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:12:53.778134 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:12:53.781786 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
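The raid6 lines above are the kernel benchmarking each SIMD gen() implementation at boot and keeping the fastest; the selection itself is just an argmax over the measured throughputs (numbers copied from the log):

```python
# gen() throughputs measured at boot, in MB/s, from the log above.
raid6_bench = {"avx2x4": 30407, "avx2x2": 30648, "avx2x1": 23729}

# The kernel picks the algorithm with the highest measured throughput.
best = max(raid6_bench, key=raid6_bench.get)
```

Here `best` is `"avx2x2"`, matching the kernel's "using algorithm avx2x2 gen() 30648 MB/s" line.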
Sep 5 00:12:53.783054 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:12:53.786513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:12:53.804994 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:12:53.805053 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:12:53.805273 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:12:53.803737 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:12:53.809282 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:12:53.809348 kernel: AES CTR mode by8 optimization enabled Sep 5 00:12:53.819548 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:12:53.819597 kernel: GPT:9289727 != 19775487 Sep 5 00:12:53.819609 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:12:53.819619 kernel: GPT:9289727 != 19775487 Sep 5 00:12:53.819629 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:12:53.819639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:12:53.824822 kernel: libata version 3.00 loaded. Sep 5 00:12:53.825832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:12:53.835808 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:12:53.838194 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:12:53.842388 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:12:53.848224 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 5 00:12:53.848491 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:12:53.848699 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (473) Sep 5 00:12:53.842466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
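The "GPT:9289727 != 19775487" warnings above come from a simple invariant: the GPT backup (alternate) header must sit on the last LBA of the disk, `disk_sectors - 1`. When a disk image is grown, as QEMU images often are, the backup header stays at the old end of the disk and the check fails until the GPT is rewritten. A sketch of the check (hypothetical helper name):

```python
def gpt_alt_header_ok(alt_header_lba: int, disk_sectors: int) -> bool:
    """The GPT backup header belongs on the disk's last LBA; anything
    else means the disk was resized after partitioning."""
    return alt_header_lba == disk_sectors - 1

# Values from the log: vda has 19775488 512-byte sectors, but the
# backup header is still at LBA 9289727 -- hence the kernel warning.
assert not gpt_alt_header_ok(9289727, 19775488)
```

Running `sgdisk -e` or GNU Parted (as the kernel suggests) moves the backup structures to the new last LBA, after which the check passes.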
Sep 5 00:12:53.848601 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:12:53.848887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:12:53.848955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:12:53.856921 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:12:53.861799 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457) Sep 5 00:12:53.861827 kernel: scsi host0: ahci Sep 5 00:12:53.866949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:12:53.870406 kernel: scsi host1: ahci Sep 5 00:12:53.878255 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:12:53.879800 kernel: scsi host2: ahci Sep 5 00:12:53.880952 kernel: scsi host3: ahci Sep 5 00:12:53.881848 kernel: scsi host4: ahci Sep 5 00:12:53.884794 kernel: scsi host5: ahci Sep 5 00:12:53.884994 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 5 00:12:53.885007 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 5 00:12:53.885024 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 5 00:12:53.885034 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 5 00:12:53.886521 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 5 00:12:53.886534 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 5 00:12:53.892244 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:12:53.900276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 5 00:12:53.931127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:12:53.937035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:12:53.945930 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:12:53.946437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:12:53.949245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:12:53.960439 disk-uuid[566]: Primary Header is updated. Sep 5 00:12:53.960439 disk-uuid[566]: Secondary Entries is updated. Sep 5 00:12:53.960439 disk-uuid[566]: Secondary Header is updated. Sep 5 00:12:53.963866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:12:53.966725 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 00:12:53.970141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:12:54.198828 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:12:54.198925 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:12:54.198959 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:12:54.198974 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:12:54.199794 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:12:54.200804 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:12:54.201807 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:12:54.203082 kernel: ata3.00: applying bridge limits Sep 5 00:12:54.203095 kernel: ata3.00: configured for UDMA/100 Sep 5 00:12:54.203800 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:12:54.259221 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:12:54.259710 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:12:54.272820 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:12:54.969808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:12:54.969886 disk-uuid[572]: The operation has completed successfully. Sep 5 00:12:54.999891 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:12:55.000015 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:12:55.026921 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:12:55.030583 sh[592]: Success Sep 5 00:12:55.044800 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 5 00:12:55.082991 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:12:55.097380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:12:55.100498 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:12:55.115554 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d Sep 5 00:12:55.115586 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:12:55.115597 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:12:55.116649 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:12:55.117443 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:12:55.122307 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:12:55.123947 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:12:55.131934 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:12:55.133657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:12:55.143872 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:12:55.143901 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:12:55.143912 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:12:55.146815 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:12:55.156153 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 00:12:55.158079 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:12:55.246725 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:12:55.258911 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 5 00:12:55.281269 systemd-networkd[770]: lo: Link UP Sep 5 00:12:55.281280 systemd-networkd[770]: lo: Gained carrier Sep 5 00:12:55.282947 systemd-networkd[770]: Enumeration completed Sep 5 00:12:55.283375 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:12:55.283379 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:12:55.284421 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:12:55.290181 systemd[1]: Reached target network.target - Network. Sep 5 00:12:55.485382 systemd-networkd[770]: eth0: Link UP Sep 5 00:12:55.485395 systemd-networkd[770]: eth0: Gained carrier Sep 5 00:12:55.485413 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:12:55.490578 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:12:55.504975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 5 00:12:55.506830 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:12:55.707187 ignition[774]: Ignition 2.19.0 Sep 5 00:12:55.707203 ignition[774]: Stage: fetch-offline Sep 5 00:12:55.707254 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:12:55.707268 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:12:55.707424 ignition[774]: parsed url from cmdline: "" Sep 5 00:12:55.707428 ignition[774]: no config URL provided Sep 5 00:12:55.707434 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:12:55.707443 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:12:55.707477 ignition[774]: op(1): [started] loading QEMU firmware config module Sep 5 00:12:55.707483 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:12:55.720887 ignition[774]: op(1): [finished] loading QEMU firmware config module Sep 5 00:12:55.759247 ignition[774]: parsing config with SHA512: 9c7f98ffa571bc5ff663db09a546db041de3052b5a1cd2419b78e075b39d46112721998ab95368a0e5aec9273bc6f032783005b2e4abcf0261d66c6543a6e0b6 Sep 5 00:12:55.764767 unknown[774]: fetched base config from "system" Sep 5 00:12:55.764805 unknown[774]: fetched user config from "qemu" Sep 5 00:12:55.765211 ignition[774]: fetch-offline: fetch-offline passed Sep 5 00:12:55.765870 systemd-resolved[224]: Detected conflict on linux IN A 10.0.0.79 Sep 5 00:12:55.765289 ignition[774]: Ignition finished successfully Sep 5 00:12:55.765880 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Sep 5 00:12:55.767955 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:12:55.770115 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:12:55.781007 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
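The fetch-offline stage above logs a SHA512 fingerprint of the rendered config before parsing it ("parsing config with SHA512: 9c7f…"). Reproducing that digest for a local config file is straightforward (hypothetical helper name; this only mirrors the logged fingerprint, not Ignition's merge logic):

```python
import hashlib

def config_fingerprint(raw: bytes) -> str:
    """SHA512 hex digest of a raw Ignition config, comparable to the
    value Ignition prints in its 'parsing config with SHA512:' line."""
    return hashlib.sha512(raw).hexdigest()
```

For example, `config_fingerprint(open("/usr/lib/ignition/user.ign", "rb").read())` should match the logged digest when that file is the config that was actually parsed.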
Sep 5 00:12:55.798090 ignition[784]: Ignition 2.19.0 Sep 5 00:12:55.798101 ignition[784]: Stage: kargs Sep 5 00:12:55.798278 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:12:55.798290 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:12:55.799948 ignition[784]: kargs: kargs passed Sep 5 00:12:55.799998 ignition[784]: Ignition finished successfully Sep 5 00:12:55.803336 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:12:55.816004 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 00:12:55.831583 ignition[792]: Ignition 2.19.0 Sep 5 00:12:55.831595 ignition[792]: Stage: disks Sep 5 00:12:55.831829 ignition[792]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:12:55.831845 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:12:55.832856 ignition[792]: disks: disks passed Sep 5 00:12:55.835351 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:12:55.832909 ignition[792]: Ignition finished successfully Sep 5 00:12:55.837162 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:12:55.838972 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:12:55.840915 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:12:55.842926 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:12:55.845183 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:12:55.858914 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:12:55.872868 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 5 00:12:55.879182 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:12:55.889915 systemd[1]: Mounting sysroot.mount - /sysroot... 
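The fsck result above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") is the standard e2fsck one-line summary: inodes used/total, then blocks used/total. A small parser for that line format (hypothetical helper name):

```python
import re

def parse_fsck_summary(line: str):
    """Extract label and inode/block usage from an e2fsck summary line
    such as 'ROOT: clean, 14/553520 files, 52654/553472 blocks'."""
    m = re.search(r"(\w+): clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    if not m:
        return None
    files_used, files_total, blocks_used, blocks_total = map(int, m.groups()[1:])
    return {"label": m.group(1),
            "files_used": files_used, "files_total": files_total,
            "blocks_used": blocks_used, "blocks_total": blocks_total}
```

On the line from this boot it reports the ROOT filesystem at 14 of 553520 inodes and 52654 of 553472 blocks in use, i.e. a nearly empty but almost fully allocated small root partition.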
Sep 5 00:12:55.990801 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none. Sep 5 00:12:55.991211 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:12:55.992202 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:12:56.008857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:12:56.010767 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:12:56.012254 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 5 00:12:56.012290 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:12:56.019792 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Sep 5 00:12:56.019822 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:12:56.012325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:12:56.025411 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:12:56.025434 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:12:56.025448 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:12:56.020987 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:12:56.026462 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:12:56.028992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:12:56.064004 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:12:56.069383 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:12:56.074621 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:12:56.078937 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:12:56.211703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:12:56.225915 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:12:56.228196 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:12:56.233627 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:12:56.234907 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:12:56.255443 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 00:12:56.348821 ignition[922]: INFO : Ignition 2.19.0 Sep 5 00:12:56.348821 ignition[922]: INFO : Stage: mount Sep 5 00:12:56.350967 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:12:56.350967 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:12:56.350967 ignition[922]: INFO : mount: mount passed Sep 5 00:12:56.350967 ignition[922]: INFO : Ignition finished successfully Sep 5 00:12:56.353256 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:12:56.365888 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:12:56.374162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 5 00:12:56.386192 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936) Sep 5 00:12:56.386233 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:12:56.386249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:12:56.387799 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:12:56.390802 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:12:56.391718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:12:56.419579 ignition[953]: INFO : Ignition 2.19.0 Sep 5 00:12:56.419579 ignition[953]: INFO : Stage: files Sep 5 00:12:56.421410 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:12:56.421410 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:12:56.421410 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:12:56.425489 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:12:56.425489 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:12:56.428629 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:12:56.430184 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:12:56.432015 unknown[953]: wrote ssh authorized keys file for user: core Sep 5 00:12:56.433250 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:12:56.435692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 5 00:12:56.437711 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 5 00:12:56.492414 ignition[953]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:12:56.812898 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 5 00:12:56.812898 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:12:56.816862 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 5 00:12:57.046271 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 00:12:57.292151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:12:57.292151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:12:57.296102 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 5 00:12:57.511101 systemd-networkd[770]: eth0: Gained IPv6LL Sep 5 00:12:57.819625 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 00:12:58.189400 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:12:58.189400 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:12:58.193018 
ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 5 00:12:58.193018 ignition[953]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:12:58.215091 ignition[953]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:12:58.221143 ignition[953]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:12:58.222719 ignition[953]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:12:58.222719 ignition[953]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:12:58.222719 ignition[953]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:12:58.222719 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:12:58.222719 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:12:58.222719 ignition[953]: INFO : files: files passed Sep 5 00:12:58.222719 ignition[953]: INFO : Ignition finished successfully Sep 5 00:12:58.224563 systemd[1]: Finished ignition-files.service - Ignition (files). 
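The "GET https://…: attempt #1" lines in the files stage above come from a retrying fetcher: each download is attempted, logged with an attempt counter, and retried after a delay on failure. A minimal sketch of that pattern (hypothetical helper, not Ignition's actual Go implementation):

```python
import time

def fetch_with_attempts(get, url, max_attempts=5, backoff=1.0):
    """Retry `get(url)` up to `max_attempts` times, logging each try in
    the 'GET <url>: attempt #N' style seen in the Ignition log."""
    for attempt in range(1, max_attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            return get(url)          # any callable returning the body
        except Exception:
            if attempt == max_attempts:
                raise                 # out of retries: surface the error
            time.sleep(backoff * attempt)  # linear backoff between tries
```

In this boot every asset (helm, cilium, the kubernetes sysext) succeeded on attempt #1, so only a single GET line appears per file.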
Sep 5 00:12:58.239944 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:12:58.242725 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:12:58.244669 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:12:58.244809 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:12:58.253170 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:12:58.255985 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:12:58.255985 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:12:58.259263 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:12:58.262392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:12:58.265048 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:12:58.280967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:12:58.310066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:12:58.310202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:12:58.312478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:12:58.314483 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:12:58.315579 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:12:58.316417 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:12:58.334251 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 5 00:12:58.345895 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:12:58.354543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:12:58.355792 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:12:58.357900 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:12:58.359854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:12:58.359966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:12:58.361996 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:12:58.363655 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:12:58.365606 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:12:58.367729 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:12:58.369708 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:12:58.371764 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:12:58.373807 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:12:58.375992 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:12:58.377936 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:12:58.380074 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:12:58.381796 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:12:58.381909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:12:58.383967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:12:58.385529 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:12:58.387530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:12:58.387634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:12:58.389689 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:12:58.389818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:12:58.392116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:12:58.392233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:12:58.394038 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:12:58.395700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:12:58.398815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:12:58.400104 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:12:58.401927 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:12:58.403886 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:12:58.403983 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:12:58.405627 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:12:58.405718 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:12:58.407596 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:12:58.407712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:12:58.410307 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:12:58.410412 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:12:58.422910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:12:58.423849 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:12:58.423967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:12:58.426719 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:12:58.428302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:12:58.428511 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:12:58.430913 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:12:58.431129 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:12:58.437926 ignition[1009]: INFO : Ignition 2.19.0
Sep 5 00:12:58.437926 ignition[1009]: INFO : Stage: umount
Sep 5 00:12:58.437926 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:12:58.437926 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:12:58.436048 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:12:58.445575 ignition[1009]: INFO : umount: umount passed
Sep 5 00:12:58.445575 ignition[1009]: INFO : Ignition finished successfully
Sep 5 00:12:58.436227 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:12:58.439947 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:12:58.440065 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:12:58.442212 systemd[1]: Stopped target network.target - Network.
Sep 5 00:12:58.443700 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:12:58.443766 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:12:58.445621 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:12:58.445673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:12:58.447394 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:12:58.447443 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:12:58.449262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:12:58.449314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:12:58.451586 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:12:58.453480 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:12:58.456746 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:12:58.456841 systemd-networkd[770]: eth0: DHCPv6 lease lost
Sep 5 00:12:58.459514 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:12:58.459656 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:12:58.463147 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:12:58.463300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:12:58.466356 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:12:58.466440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:12:58.482971 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:12:58.484904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:12:58.484982 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:12:58.487427 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:12:58.487503 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:12:58.490899 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:12:58.490964 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:12:58.493184 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:12:58.493236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:12:58.498623 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:12:58.509045 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:12:58.510073 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:12:58.520604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:12:58.521669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:12:58.524308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:12:58.524370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:12:58.527461 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:12:58.527512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:12:58.529424 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:12:58.529478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:12:58.530479 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:12:58.530527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:12:58.534835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:12:58.534886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:12:58.539907 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:12:58.540149 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:12:58.540205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:12:58.542393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:12:58.542442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:12:58.549956 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:12:58.550081 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:12:58.682877 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:12:58.683025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:12:58.685041 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:12:58.686650 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:12:58.686710 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:12:58.693903 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:12:58.701328 systemd[1]: Switching root.
Sep 5 00:12:58.729611 systemd-journald[193]: Journal stopped
Sep 5 00:13:00.026177 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:13:00.026272 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:13:00.026299 kernel: SELinux: policy capability open_perms=1
Sep 5 00:13:00.026311 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:13:00.026323 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:13:00.026335 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:13:00.026347 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:13:00.026367 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:13:00.026379 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:13:00.026391 kernel: audit: type=1403 audit(1757031179.265:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:13:00.026405 systemd[1]: Successfully loaded SELinux policy in 39.997ms.
Sep 5 00:13:00.026432 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.494ms.
Sep 5 00:13:00.026446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:13:00.026459 systemd[1]: Detected virtualization kvm.
Sep 5 00:13:00.026471 systemd[1]: Detected architecture x86-64.
Sep 5 00:13:00.026487 systemd[1]: Detected first boot.
Sep 5 00:13:00.026505 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:13:00.026518 zram_generator::config[1053]: No configuration found.
Sep 5 00:13:00.026534 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:13:00.026546 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:13:00.026558 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:13:00.026571 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:13:00.026584 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:13:00.026602 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:13:00.026618 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:13:00.026630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:13:00.026643 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:13:00.026656 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:13:00.026669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:13:00.026681 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:13:00.026694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:13:00.026706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:13:00.026722 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:13:00.026735 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:13:00.026747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:13:00.026760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:13:00.026785 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:13:00.026798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:13:00.026810 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:13:00.026824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:13:00.026836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:13:00.026853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:13:00.026865 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:13:00.026878 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:13:00.026891 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:13:00.026903 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:13:00.026916 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:13:00.026928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:13:00.026940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:13:00.026960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:13:00.026973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:13:00.026986 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:13:00.026998 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:13:00.027011 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:13:00.027023 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:13:00.027035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:13:00.027048 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:13:00.027061 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:13:00.027080 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:13:00.027093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:13:00.027107 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:13:00.027119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:13:00.027131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:13:00.027144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:13:00.027157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:13:00.027169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:13:00.027184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:13:00.027197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:13:00.027218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:13:00.027230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:13:00.027243 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:13:00.027255 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:13:00.027267 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:13:00.027279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:13:00.027291 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:13:00.027307 kernel: fuse: init (API version 7.39)
Sep 5 00:13:00.027318 kernel: loop: module loaded
Sep 5 00:13:00.027330 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:13:00.027343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:13:00.027355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:13:00.027368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:13:00.027381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:13:00.027394 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:13:00.027407 systemd[1]: Stopped verity-setup.service.
Sep 5 00:13:00.027421 kernel: ACPI: bus type drm_connector registered
Sep 5 00:13:00.027434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:13:00.027446 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:13:00.027476 systemd-journald[1123]: Collecting audit messages is disabled.
Sep 5 00:13:00.027498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:13:00.027511 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:13:00.027523 systemd-journald[1123]: Journal started
Sep 5 00:13:00.027548 systemd-journald[1123]: Runtime Journal (/run/log/journal/2e0a25e6782e4e6f85d6afe3c5b4fdb6) is 6.0M, max 48.4M, 42.3M free.
Sep 5 00:12:59.787417 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:12:59.811797 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:12:59.812289 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:13:00.029913 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:13:00.031480 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:13:00.032726 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:13:00.033981 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:13:00.035252 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:13:00.036933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:13:00.038533 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:13:00.038724 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:13:00.040239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:13:00.040428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:13:00.041877 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:13:00.042063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:13:00.043575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:13:00.043759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:13:00.045298 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:13:00.045482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:13:00.047004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:13:00.047189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:13:00.048621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:13:00.050337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:13:00.051882 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:13:00.068850 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:13:00.080988 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:13:00.083753 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:13:00.084912 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:13:00.084949 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:13:00.087038 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 00:13:00.093187 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 00:13:00.096185 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:13:00.097660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:13:00.100727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:13:00.106895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:13:00.108476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:13:00.110685 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:13:00.112223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:13:00.113990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:13:00.118322 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:13:00.120674 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:13:00.131543 systemd-journald[1123]: Time spent on flushing to /var/log/journal/2e0a25e6782e4e6f85d6afe3c5b4fdb6 is 27.815ms for 956 entries.
Sep 5 00:13:00.131543 systemd-journald[1123]: System Journal (/var/log/journal/2e0a25e6782e4e6f85d6afe3c5b4fdb6) is 8.0M, max 195.6M, 187.6M free.
Sep 5 00:13:00.171161 systemd-journald[1123]: Received client request to flush runtime journal.
Sep 5 00:13:00.171218 kernel: loop0: detected capacity change from 0 to 224512
Sep 5 00:13:00.125890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:13:00.127327 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:13:00.128711 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:13:00.130370 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 00:13:00.147799 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 00:13:00.152311 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:13:00.155484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:13:00.166027 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 00:13:00.167610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:13:00.175144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:13:00.178118 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 5 00:13:00.188804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:13:00.191358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:13:00.201981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:13:00.204392 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:13:00.205322 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 00:13:00.218932 kernel: loop1: detected capacity change from 0 to 140768
Sep 5 00:13:00.230556 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Sep 5 00:13:00.231076 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Sep 5 00:13:00.238353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:13:00.257814 kernel: loop2: detected capacity change from 0 to 142488
Sep 5 00:13:00.300044 kernel: loop3: detected capacity change from 0 to 224512
Sep 5 00:13:00.309828 kernel: loop4: detected capacity change from 0 to 140768
Sep 5 00:13:00.323802 kernel: loop5: detected capacity change from 0 to 142488
Sep 5 00:13:00.331953 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:13:00.334245 (sd-merge)[1192]: Merged extensions into '/usr'.
Sep 5 00:13:00.339214 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:13:00.339229 systemd[1]: Reloading...
Sep 5 00:13:00.403812 zram_generator::config[1224]: No configuration found.
Sep 5 00:13:00.466107 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:13:00.527361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:13:00.582712 systemd[1]: Reloading finished in 242 ms.
Sep 5 00:13:00.619764 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:13:00.621499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:13:00.634008 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:13:00.636318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:13:00.646732 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:13:00.646750 systemd[1]: Reloading...
Sep 5 00:13:00.667108 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:13:00.667626 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:13:00.669855 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:13:00.670250 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Sep 5 00:13:00.670402 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Sep 5 00:13:00.674422 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:13:00.674503 systemd-tmpfiles[1256]: Skipping /boot
Sep 5 00:13:00.692323 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:13:00.692340 systemd-tmpfiles[1256]: Skipping /boot
Sep 5 00:13:00.707233 zram_generator::config[1286]: No configuration found.
Sep 5 00:13:00.818035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:13:00.868512 systemd[1]: Reloading finished in 221 ms.
Sep 5 00:13:00.888412 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:13:00.890121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:13:00.909880 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 00:13:00.912551 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:13:00.914943 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:13:00.918970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:13:00.929993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:13:00.933068 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:13:00.941358 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:13:00.944362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:13:00.944544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:13:00.946816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:13:00.949959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:13:00.953758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:13:00.954945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:13:00.955046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:13:00.957546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:13:00.958258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:13:00.960708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:13:00.961384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:13:00.971373 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:13:00.973551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:13:00.973729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:13:00.976540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:13:00.976982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:13:00.983848 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Sep 5 00:13:00.988146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:13:00.989384 augenrules[1351]: No rules
Sep 5 00:13:00.992148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:13:00.994070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:13:00.998024 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:13:00.999071 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:13:01.000088 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:13:01.002359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:13:01.005691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:13:01.008226 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:13:01.010705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:13:01.011011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:13:01.013054 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:13:01.013251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:13:01.015097 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:13:01.020547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:13:01.040924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:13:01.041242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:13:01.047090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:13:01.058880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:13:01.063100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:13:01.067225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 5 00:13:01.068418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:13:01.071938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:13:01.073372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:13:01.073529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:13:01.075302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:13:01.075549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:13:01.078469 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:13:01.078656 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:13:01.080988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:13:01.082839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:13:01.094309 systemd[1]: Finished ensure-sysext.service. Sep 5 00:13:01.098282 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1389) Sep 5 00:13:01.096980 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:13:01.097187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:13:01.112358 systemd-resolved[1326]: Positive Trust Anchors: Sep 5 00:13:01.112374 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:13:01.112406 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:13:01.113589 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:13:01.116566 systemd-resolved[1326]: Defaulting to hostname 'linux'. Sep 5 00:13:01.118709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:13:01.123760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:13:01.124978 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:13:01.125048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:13:01.134574 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:13:01.140227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:13:01.142806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 5 00:13:01.149970 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:13:01.152927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 5 00:13:01.172953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:13:01.185809 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 5 00:13:01.186390 systemd-networkd[1394]: lo: Link UP
Sep 5 00:13:01.186404 systemd-networkd[1394]: lo: Gained carrier
Sep 5 00:13:01.188599 systemd-networkd[1394]: Enumeration completed
Sep 5 00:13:01.188693 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:13:01.190164 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:13:01.190281 systemd[1]: Reached target network.target - Network.
Sep 5 00:13:01.190300 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:13:01.191437 systemd-networkd[1394]: eth0: Link UP
Sep 5 00:13:01.191441 systemd-networkd[1394]: eth0: Gained carrier
Sep 5 00:13:01.191453 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:13:01.198967 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:13:01.205067 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 00:13:01.216304 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 5 00:13:01.216511 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 00:13:01.208845 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:13:01.223056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:13:01.225272 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:13:01.228577 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:13:01.228638 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2025-09-05 00:13:01.400775 UTC.
Sep 5 00:13:01.231208 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 00:13:01.237817 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 00:13:01.319812 kernel: kvm_amd: TSC scaling supported
Sep 5 00:13:01.320041 kernel: kvm_amd: Nested Virtualization enabled
Sep 5 00:13:01.320079 kernel: kvm_amd: Nested Paging enabled
Sep 5 00:13:01.320109 kernel: kvm_amd: LBR virtualization supported
Sep 5 00:13:01.320164 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 5 00:13:01.320205 kernel: kvm_amd: Virtual GIF supported
Sep 5 00:13:01.339854 kernel: EDAC MC: Ver: 3.0.0
Sep 5 00:13:01.345601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:13:01.381245 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 00:13:01.393049 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 00:13:01.402673 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:13:01.433395 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 00:13:01.435023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:13:01.436129 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:13:01.437290 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 00:13:01.438555 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 00:13:01.440044 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 00:13:01.441242 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 00:13:01.442489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 00:13:01.443687 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 00:13:01.443712 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:13:01.444590 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:13:01.446353 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 00:13:01.449176 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 00:13:01.459495 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 00:13:01.462006 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 00:13:01.463569 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 00:13:01.464732 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:13:01.465685 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:13:01.466633 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:13:01.466658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:13:01.467675 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 00:13:01.469729 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 00:13:01.473877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 00:13:01.478031 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 00:13:01.479342 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 00:13:01.484793 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:13:01.483224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 00:13:01.485078 jq[1431]: false
Sep 5 00:13:01.488124 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 00:13:01.493122 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 00:13:01.496982 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found loop3
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found loop4
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found loop5
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found sr0
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda1
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda2
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda3
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found usr
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda4
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda6
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda7
Sep 5 00:13:01.504691 extend-filesystems[1432]: Found vda9
Sep 5 00:13:01.504691 extend-filesystems[1432]: Checking size of /dev/vda9
Sep 5 00:13:01.557500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1389)
Sep 5 00:13:01.557616 extend-filesystems[1432]: Resized partition /dev/vda9
Sep 5 00:13:01.510055 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 00:13:01.509551 dbus-daemon[1430]: [system] SELinux support is enabled
Sep 5 00:13:01.559192 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Sep 5 00:13:01.511748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 00:13:01.512684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 00:13:01.513694 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 00:13:01.517135 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 00:13:01.560763 jq[1448]: true
Sep 5 00:13:01.520090 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 00:13:01.527175 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 00:13:01.530553 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 00:13:01.561228 jq[1453]: true
Sep 5 00:13:01.530854 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 00:13:01.532729 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 00:13:01.534229 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 00:13:01.550388 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 00:13:01.573956 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 00:13:01.568597 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 00:13:01.568956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 00:13:01.576299 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 5 00:13:01.576664 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 00:13:01.578815 systemd-logind[1441]: New seat seat0.
Sep 5 00:13:01.580324 update_engine[1447]: I20250905 00:13:01.580235 1447 main.cc:92] Flatcar Update Engine starting
Sep 5 00:13:01.582867 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 00:13:01.584870 update_engine[1447]: I20250905 00:13:01.583951 1447 update_check_scheduler.cc:74] Next update check in 6m45s
Sep 5 00:13:01.593311 tar[1452]: linux-amd64/LICENSE
Sep 5 00:13:01.593560 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 00:13:01.595554 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 00:13:01.595679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 00:13:01.597925 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 00:13:01.598068 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 00:13:01.600345 tar[1452]: linux-amd64/helm
Sep 5 00:13:01.601821 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 00:13:01.605037 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 00:13:01.625688 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 00:13:01.625688 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 00:13:01.625688 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 00:13:01.629895 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Sep 5 00:13:01.630182 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 00:13:01.630434 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 00:13:01.642218 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 00:13:01.643708 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 00:13:01.647102 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 00:13:01.665245 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 00:13:01.767320 containerd[1459]: time="2025-09-05T00:13:01.767114748Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 00:13:01.791652 containerd[1459]: time="2025-09-05T00:13:01.791604256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.793502 containerd[1459]: time="2025-09-05T00:13:01.793462890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:13:01.793502 containerd[1459]: time="2025-09-05T00:13:01.793493408Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 00:13:01.793587 containerd[1459]: time="2025-09-05T00:13:01.793508055Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 00:13:01.793727 containerd[1459]: time="2025-09-05T00:13:01.793694515Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 00:13:01.793727 containerd[1459]: time="2025-09-05T00:13:01.793719011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.793819 containerd[1459]: time="2025-09-05T00:13:01.793800033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:13:01.793851 containerd[1459]: time="2025-09-05T00:13:01.793817095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794041 containerd[1459]: time="2025-09-05T00:13:01.794019424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794041 containerd[1459]: time="2025-09-05T00:13:01.794037598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794092 containerd[1459]: time="2025-09-05T00:13:01.794050292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794092 containerd[1459]: time="2025-09-05T00:13:01.794072704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794203 containerd[1459]: time="2025-09-05T00:13:01.794184403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794474 containerd[1459]: time="2025-09-05T00:13:01.794446685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794602 containerd[1459]: time="2025-09-05T00:13:01.794576989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:13:01.794602 containerd[1459]: time="2025-09-05T00:13:01.794594522Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 00:13:01.794717 containerd[1459]: time="2025-09-05T00:13:01.794698006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 00:13:01.794787 containerd[1459]: time="2025-09-05T00:13:01.794760544Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:13:01.916422 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 00:13:01.941336 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 00:13:01.949062 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 00:13:01.957097 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 00:13:01.957388 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 00:13:01.964205 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 00:13:01.978424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 00:13:01.993062 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 00:13:01.995187 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 00:13:01.996436 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 00:13:02.010882 containerd[1459]: time="2025-09-05T00:13:02.010836538Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 00:13:02.010960 containerd[1459]: time="2025-09-05T00:13:02.010904814Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 00:13:02.010960 containerd[1459]: time="2025-09-05T00:13:02.010922591Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 00:13:02.010960 containerd[1459]: time="2025-09-05T00:13:02.010937728Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 00:13:02.010960 containerd[1459]: time="2025-09-05T00:13:02.010951852Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 00:13:02.011153 containerd[1459]: time="2025-09-05T00:13:02.011133444Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 00:13:02.011418 containerd[1459]: time="2025-09-05T00:13:02.011392256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 00:13:02.011532 containerd[1459]: time="2025-09-05T00:13:02.011512133Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 00:13:02.011571 containerd[1459]: time="2025-09-05T00:13:02.011530719Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 00:13:02.011571 containerd[1459]: time="2025-09-05T00:13:02.011543113Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 00:13:02.011571 containerd[1459]: time="2025-09-05T00:13:02.011556030Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013121 containerd[1459]: time="2025-09-05T00:13:02.013038713Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013121 containerd[1459]: time="2025-09-05T00:13:02.013103826Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013121 containerd[1459]: time="2025-09-05T00:13:02.013122381Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013200 containerd[1459]: time="2025-09-05T00:13:02.013142994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013200 containerd[1459]: time="2025-09-05T00:13:02.013159041Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013200 containerd[1459]: time="2025-09-05T00:13:02.013175928Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013200 containerd[1459]: time="2025-09-05T00:13:02.013189254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 00:13:02.013280 containerd[1459]: time="2025-09-05T00:13:02.013211739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013280 containerd[1459]: time="2025-09-05T00:13:02.013229393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013280 containerd[1459]: time="2025-09-05T00:13:02.013242689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013280 containerd[1459]: time="2025-09-05T00:13:02.013257293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013280 containerd[1459]: time="2025-09-05T00:13:02.013271979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013285407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013298835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013312028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013336662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013355709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013368195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013378 containerd[1459]: time="2025-09-05T00:13:02.013380261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013512 containerd[1459]: time="2025-09-05T00:13:02.013394559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013512 containerd[1459]: time="2025-09-05T00:13:02.013411476Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 00:13:02.013512 containerd[1459]: time="2025-09-05T00:13:02.013450020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013512 containerd[1459]: time="2025-09-05T00:13:02.013462056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013512 containerd[1459]: time="2025-09-05T00:13:02.013486251Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 00:13:02.013604 containerd[1459]: time="2025-09-05T00:13:02.013542776Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 00:13:02.013604 containerd[1459]: time="2025-09-05T00:13:02.013563072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 00:13:02.013604 containerd[1459]: time="2025-09-05T00:13:02.013574708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 00:13:02.013604 containerd[1459]: time="2025-09-05T00:13:02.013586683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 00:13:02.013604 containerd[1459]: time="2025-09-05T00:13:02.013596212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.013704 containerd[1459]: time="2025-09-05T00:13:02.013609720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 00:13:02.013704 containerd[1459]: time="2025-09-05T00:13:02.013678078Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 00:13:02.013704 containerd[1459]: time="2025-09-05T00:13:02.013690359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 00:13:02.014208 containerd[1459]: time="2025-09-05T00:13:02.014149411Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 00:13:02.014208 containerd[1459]: time="2025-09-05T00:13:02.014215505Z" level=info msg="Connect containerd service"
Sep 5 00:13:02.014377 containerd[1459]: time="2025-09-05T00:13:02.014257130Z" level=info msg="using legacy CRI server"
Sep 5 00:13:02.014377 containerd[1459]: time="2025-09-05T00:13:02.014264765Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:13:02.014377 containerd[1459]: time="2025-09-05T00:13:02.014359956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 00:13:02.015065 containerd[1459]: time="2025-09-05T00:13:02.015039431Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:13:02.015253 containerd[1459]: time="2025-09-05T00:13:02.015189192Z" level=info msg="Start subscribing containerd event"
Sep 5 00:13:02.015295 containerd[1459]: time="2025-09-05T00:13:02.015275071Z" level=info msg="Start recovering state"
Sep 5 00:13:02.015383 containerd[1459]: time="2025-09-05T00:13:02.015364880Z" level=info msg="Start event monitor"
Sep 5 00:13:02.015429 containerd[1459]: time="2025-09-05T00:13:02.015368646Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 00:13:02.015429 containerd[1459]: time="2025-09-05T00:13:02.015413730Z" level=info msg="Start snapshots syncer"
Sep 5 00:13:02.015429 containerd[1459]: time="2025-09-05T00:13:02.015427444Z" level=info msg="Start cni network conf syncer for default"
Sep 5 00:13:02.015499 containerd[1459]: time="2025-09-05T00:13:02.015435815Z" level=info msg="Start streaming server"
Sep 5 00:13:02.015499 containerd[1459]: time="2025-09-05T00:13:02.015446367Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 00:13:02.015794 containerd[1459]: time="2025-09-05T00:13:02.015736620Z" level=info msg="containerd successfully booted in 0.249937s"
Sep 5 00:13:02.015864 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 00:13:02.035494 tar[1452]: linux-amd64/README.md
Sep 5 00:13:02.049022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 00:13:03.016128 systemd-networkd[1394]: eth0: Gained IPv6LL
Sep 5 00:13:03.019422 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 00:13:03.088051 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 00:13:03.103051 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 00:13:03.105788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:13:03.108133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 00:13:03.128730 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 00:13:03.129007 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 00:13:03.130736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 00:13:03.133943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 00:13:03.850744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:13:03.852877 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:13:03.854913 systemd[1]: Startup finished in 1.063s (kernel) + 6.559s (initrd) + 4.627s (userspace) = 12.250s. Sep 5 00:13:03.872143 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:13:04.282853 kubelet[1542]: E0905 00:13:04.282685 1542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:13:04.287637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:13:04.287903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:13:04.288278 systemd[1]: kubelet.service: Consumed 1.030s CPU time. Sep 5 00:13:06.400386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:13:06.401770 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:41390.service - OpenSSH per-connection server daemon (10.0.0.1:41390). Sep 5 00:13:06.443562 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 41390 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:06.445710 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:06.453998 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:13:06.462042 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:13:06.463948 systemd-logind[1441]: New session 1 of user core. Sep 5 00:13:06.475375 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:13:06.478353 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 5 00:13:06.532421 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:13:06.649161 systemd[1559]: Queued start job for default target default.target. Sep 5 00:13:06.663467 systemd[1559]: Created slice app.slice - User Application Slice. Sep 5 00:13:06.663498 systemd[1559]: Reached target paths.target - Paths. Sep 5 00:13:06.663515 systemd[1559]: Reached target timers.target - Timers. Sep 5 00:13:06.665301 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:13:06.677760 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:13:06.677913 systemd[1559]: Reached target sockets.target - Sockets. Sep 5 00:13:06.677933 systemd[1559]: Reached target basic.target - Basic System. Sep 5 00:13:06.677974 systemd[1559]: Reached target default.target - Main User Target. Sep 5 00:13:06.678010 systemd[1559]: Startup finished in 137ms. Sep 5 00:13:06.678619 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:13:06.680408 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:13:06.744818 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Sep 5 00:13:06.778057 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:06.779713 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:06.784221 systemd-logind[1441]: New session 2 of user core. Sep 5 00:13:06.793936 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:13:06.849640 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 5 00:13:06.865560 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:41406.service: Deactivated successfully. Sep 5 00:13:06.867479 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 5 00:13:06.869183 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:13:06.876055 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:41414.service - OpenSSH per-connection server daemon (10.0.0.1:41414). Sep 5 00:13:06.876952 systemd-logind[1441]: Removed session 2. Sep 5 00:13:06.904567 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 41414 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:06.906137 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:06.910022 systemd-logind[1441]: New session 3 of user core. Sep 5 00:13:06.919929 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:13:06.970625 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 5 00:13:06.981518 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:41414.service: Deactivated successfully. Sep 5 00:13:06.983394 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:13:06.985161 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:13:06.993039 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:41430.service - OpenSSH per-connection server daemon (10.0.0.1:41430). Sep 5 00:13:06.994159 systemd-logind[1441]: Removed session 3. Sep 5 00:13:07.023345 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 41430 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:07.028800 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:07.033020 systemd-logind[1441]: New session 4 of user core. Sep 5 00:13:07.043942 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:13:07.100340 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 5 00:13:07.107622 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:41430.service: Deactivated successfully. Sep 5 00:13:07.109612 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 5 00:13:07.111069 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:13:07.121054 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:41440.service - OpenSSH per-connection server daemon (10.0.0.1:41440). Sep 5 00:13:07.121958 systemd-logind[1441]: Removed session 4. Sep 5 00:13:07.147761 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 41440 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:07.149312 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:07.155944 systemd-logind[1441]: New session 5 of user core. Sep 5 00:13:07.165924 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:13:07.226289 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:13:07.226677 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:13:07.247262 sudo[1594]: pam_unix(sudo:session): session closed for user root Sep 5 00:13:07.249616 sshd[1591]: pam_unix(sshd:session): session closed for user core Sep 5 00:13:07.263081 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:41440.service: Deactivated successfully. Sep 5 00:13:07.265081 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:13:07.267217 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:13:07.281165 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Sep 5 00:13:07.282343 systemd-logind[1441]: Removed session 5. Sep 5 00:13:07.311906 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:07.313820 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:07.318221 systemd-logind[1441]: New session 6 of user core. 
Sep 5 00:13:07.329930 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:13:07.393893 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:13:07.394350 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:13:07.400585 sudo[1603]: pam_unix(sudo:session): session closed for user root Sep 5 00:13:07.411699 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:13:07.412191 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:13:07.435202 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:13:07.438308 auditctl[1606]: No rules Sep 5 00:13:07.438906 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:13:07.439231 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:13:07.445199 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:13:07.494739 augenrules[1624]: No rules Sep 5 00:13:07.500464 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:13:07.502771 sudo[1602]: pam_unix(sudo:session): session closed for user root Sep 5 00:13:07.509120 sshd[1599]: pam_unix(sshd:session): session closed for user core Sep 5 00:13:07.516685 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:41450.service: Deactivated successfully. Sep 5 00:13:07.519108 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:13:07.523723 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:13:07.534241 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:41460.service - OpenSSH per-connection server daemon (10.0.0.1:41460). Sep 5 00:13:07.536540 systemd-logind[1441]: Removed session 6. 
Sep 5 00:13:07.574780 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 41460 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:13:07.577316 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:13:07.593612 systemd-logind[1441]: New session 7 of user core. Sep 5 00:13:07.611261 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:13:07.671675 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:13:07.672142 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:13:09.120372 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:13:09.120447 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:13:10.070278 dockerd[1653]: time="2025-09-05T00:13:10.070146674Z" level=info msg="Starting up" Sep 5 00:13:11.298179 dockerd[1653]: time="2025-09-05T00:13:11.298113938Z" level=info msg="Loading containers: start." Sep 5 00:13:11.489869 kernel: Initializing XFRM netlink socket Sep 5 00:13:11.723182 systemd-networkd[1394]: docker0: Link UP Sep 5 00:13:11.758703 dockerd[1653]: time="2025-09-05T00:13:11.758602759Z" level=info msg="Loading containers: done." Sep 5 00:13:11.793747 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3785231121-merged.mount: Deactivated successfully. 
Sep 5 00:13:11.814023 dockerd[1653]: time="2025-09-05T00:13:11.811606860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:13:11.814023 dockerd[1653]: time="2025-09-05T00:13:11.811816691Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:13:11.814023 dockerd[1653]: time="2025-09-05T00:13:11.812025282Z" level=info msg="Daemon has completed initialization" Sep 5 00:13:11.932110 dockerd[1653]: time="2025-09-05T00:13:11.931493000Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:13:11.931677 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:13:13.526912 containerd[1459]: time="2025-09-05T00:13:13.526846918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 00:13:14.141930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297035596.mount: Deactivated successfully. Sep 5 00:13:14.538252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:13:14.546159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:14.972970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:13:14.973152 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:13:15.304532 kubelet[1825]: E0905 00:13:15.304356 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:13:15.313297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:13:15.313520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:13:15.993662 containerd[1459]: time="2025-09-05T00:13:15.993589340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:15.994377 containerd[1459]: time="2025-09-05T00:13:15.994326515Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 5 00:13:15.995791 containerd[1459]: time="2025-09-05T00:13:15.995733041Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:15.999551 containerd[1459]: time="2025-09-05T00:13:15.999474600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:16.001023 containerd[1459]: time="2025-09-05T00:13:16.000968671Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.474063136s" Sep 5 00:13:16.001084 containerd[1459]: time="2025-09-05T00:13:16.001020554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 5 00:13:16.001829 containerd[1459]: time="2025-09-05T00:13:16.001786657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 00:13:19.585272 containerd[1459]: time="2025-09-05T00:13:19.585186040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:19.586395 containerd[1459]: time="2025-09-05T00:13:19.586295840Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 5 00:13:19.587560 containerd[1459]: time="2025-09-05T00:13:19.587491120Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:19.590505 containerd[1459]: time="2025-09-05T00:13:19.590448549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:19.591670 containerd[1459]: time="2025-09-05T00:13:19.591613204Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 
3.589788099s" Sep 5 00:13:19.591670 containerd[1459]: time="2025-09-05T00:13:19.591655697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 5 00:13:19.592215 containerd[1459]: time="2025-09-05T00:13:19.592181599Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 00:13:22.523146 containerd[1459]: time="2025-09-05T00:13:22.523059591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:22.539399 containerd[1459]: time="2025-09-05T00:13:22.539313565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 5 00:13:22.549903 containerd[1459]: time="2025-09-05T00:13:22.549845680Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:22.576533 containerd[1459]: time="2025-09-05T00:13:22.576498472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:22.577586 containerd[1459]: time="2025-09-05T00:13:22.577558233Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.985347731s" Sep 5 00:13:22.577650 containerd[1459]: time="2025-09-05T00:13:22.577590843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference 
\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 5 00:13:22.578442 containerd[1459]: time="2025-09-05T00:13:22.578418393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 00:13:24.612388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337079549.mount: Deactivated successfully. Sep 5 00:13:25.564002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:13:25.573942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:25.751169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:25.755666 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:13:25.928248 kubelet[1897]: E0905 00:13:25.928114 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:13:25.932586 containerd[1459]: time="2025-09-05T00:13:25.932526834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:25.933134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:13:25.933403 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 00:13:25.933638 containerd[1459]: time="2025-09-05T00:13:25.933487402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 5 00:13:25.934616 containerd[1459]: time="2025-09-05T00:13:25.934588874Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:25.937122 containerd[1459]: time="2025-09-05T00:13:25.937070517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:25.937565 containerd[1459]: time="2025-09-05T00:13:25.937535944Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 3.359089982s" Sep 5 00:13:25.937598 containerd[1459]: time="2025-09-05T00:13:25.937564125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 5 00:13:25.938040 containerd[1459]: time="2025-09-05T00:13:25.938018599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:13:26.552138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366123781.mount: Deactivated successfully. 
Sep 5 00:13:27.615117 containerd[1459]: time="2025-09-05T00:13:27.615032356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:27.615756 containerd[1459]: time="2025-09-05T00:13:27.615711236Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:13:27.616870 containerd[1459]: time="2025-09-05T00:13:27.616840054Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:27.619543 containerd[1459]: time="2025-09-05T00:13:27.619500520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:27.620971 containerd[1459]: time="2025-09-05T00:13:27.620944730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.682894243s" Sep 5 00:13:27.621012 containerd[1459]: time="2025-09-05T00:13:27.620977076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:13:27.621454 containerd[1459]: time="2025-09-05T00:13:27.621434722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:13:28.166656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81400463.mount: Deactivated successfully. 
Sep 5 00:13:28.191230 containerd[1459]: time="2025-09-05T00:13:28.191150457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:28.192012 containerd[1459]: time="2025-09-05T00:13:28.191953259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:13:28.193265 containerd[1459]: time="2025-09-05T00:13:28.193220550Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:28.195538 containerd[1459]: time="2025-09-05T00:13:28.195493167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:28.196338 containerd[1459]: time="2025-09-05T00:13:28.196301242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.838457ms" Sep 5 00:13:28.196398 containerd[1459]: time="2025-09-05T00:13:28.196339569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:13:28.196980 containerd[1459]: time="2025-09-05T00:13:28.196953861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 00:13:28.743696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508331751.mount: Deactivated successfully. 
Sep 5 00:13:31.536203 containerd[1459]: time="2025-09-05T00:13:31.536121647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:31.536899 containerd[1459]: time="2025-09-05T00:13:31.536820049Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 5 00:13:31.538154 containerd[1459]: time="2025-09-05T00:13:31.538111869Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:31.541578 containerd[1459]: time="2025-09-05T00:13:31.541512546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:13:31.542755 containerd[1459]: time="2025-09-05T00:13:31.542716913Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.345729188s" Sep 5 00:13:31.542811 containerd[1459]: time="2025-09-05T00:13:31.542760454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 5 00:13:34.900230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:34.916367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:34.947215 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Sep 5 00:13:34.947233 systemd[1]: Reloading... 
Sep 5 00:13:35.040446 zram_generator::config[2088]: No configuration found. Sep 5 00:13:35.275134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:13:35.353626 systemd[1]: Reloading finished in 405 ms. Sep 5 00:13:35.411431 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:13:35.411532 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:13:35.411851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:35.413699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:35.594665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:35.599653 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:13:35.645302 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:13:35.645302 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:13:35.645302 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:13:35.645699 kubelet[2133]: I0905 00:13:35.645370 2133 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:13:36.266473 kubelet[2133]: I0905 00:13:36.266423 2133 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 5 00:13:36.266473 kubelet[2133]: I0905 00:13:36.266459 2133 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:13:36.266757 kubelet[2133]: I0905 00:13:36.266740 2133 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 5 00:13:36.295320 kubelet[2133]: E0905 00:13:36.294387 2133 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:36.300647 kubelet[2133]: I0905 00:13:36.300588 2133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:13:36.310539 kubelet[2133]: E0905 00:13:36.310483 2133 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 00:13:36.310539 kubelet[2133]: I0905 00:13:36.310527 2133 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 00:13:36.316498 kubelet[2133]: I0905 00:13:36.316465 2133 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:13:36.316741 kubelet[2133]: I0905 00:13:36.316698 2133 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:13:36.316927 kubelet[2133]: I0905 00:13:36.316732 2133 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:13:36.317429 kubelet[2133]: I0905 00:13:36.317396 2133 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:13:36.317429 kubelet[2133]: I0905 00:13:36.317415 2133 container_manager_linux.go:304] "Creating device plugin manager"
Sep 5 00:13:36.317598 kubelet[2133]: I0905 00:13:36.317571 2133 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:13:36.320553 kubelet[2133]: I0905 00:13:36.320512 2133 kubelet.go:446] "Attempting to sync node with API server"
Sep 5 00:13:36.320553 kubelet[2133]: I0905 00:13:36.320541 2133 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:13:36.320629 kubelet[2133]: I0905 00:13:36.320559 2133 kubelet.go:352] "Adding apiserver pod source"
Sep 5 00:13:36.320629 kubelet[2133]: I0905 00:13:36.320570 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:13:36.324797 kubelet[2133]: I0905 00:13:36.324748 2133 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 00:13:36.325163 kubelet[2133]: I0905 00:13:36.325136 2133 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 00:13:36.325228 kubelet[2133]: W0905 00:13:36.325208 2133 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 00:13:36.325980 kubelet[2133]: W0905 00:13:36.325865 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:36.325980 kubelet[2133]: E0905 00:13:36.325916 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:36.326744 kubelet[2133]: W0905 00:13:36.326701 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:36.326813 kubelet[2133]: E0905 00:13:36.326759 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:36.327763 kubelet[2133]: I0905 00:13:36.327734 2133 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 00:13:36.327828 kubelet[2133]: I0905 00:13:36.327787 2133 server.go:1287] "Started kubelet"
Sep 5 00:13:36.329519 kubelet[2133]: I0905 00:13:36.328876 2133 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:13:36.329519 kubelet[2133]: I0905 00:13:36.329112 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:13:36.336691 kubelet[2133]: I0905 00:13:36.334367 2133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:13:36.336691 kubelet[2133]: I0905 00:13:36.334837 2133 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:13:36.336691 kubelet[2133]: I0905 00:13:36.335545 2133 server.go:479] "Adding debug handlers to kubelet server"
Sep 5 00:13:36.339114 kubelet[2133]: I0905 00:13:36.337470 2133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:13:36.339114 kubelet[2133]: E0905 00:13:36.336496 2133 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a95047ced5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:13:36.327748956 +0000 UTC m=+0.722792092,LastTimestamp:2025-09-05 00:13:36.327748956 +0000 UTC m=+0.722792092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 00:13:36.339114 kubelet[2133]: E0905 00:13:36.338169 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.339114 kubelet[2133]: I0905 00:13:36.338194 2133 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:13:36.339114 kubelet[2133]: I0905 00:13:36.338328 2133 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:13:36.339114 kubelet[2133]: I0905 00:13:36.338540 2133 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:13:36.339114 kubelet[2133]: W0905 00:13:36.338827 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:36.339468 kubelet[2133]: E0905 00:13:36.338881 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:36.339468 kubelet[2133]: E0905 00:13:36.339070 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms"
Sep 5 00:13:36.341706 kubelet[2133]: I0905 00:13:36.341665 2133 factory.go:221] Registration of the containerd container factory successfully
Sep 5 00:13:36.341706 kubelet[2133]: E0905 00:13:36.341674 2133 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:13:36.341706 kubelet[2133]: I0905 00:13:36.341684 2133 factory.go:221] Registration of the systemd container factory successfully
Sep 5 00:13:36.341961 kubelet[2133]: I0905 00:13:36.341912 2133 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:13:36.360978 kubelet[2133]: I0905 00:13:36.360921 2133 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:13:36.360978 kubelet[2133]: I0905 00:13:36.360949 2133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:13:36.360978 kubelet[2133]: I0905 00:13:36.360970 2133 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:13:36.365950 kubelet[2133]: I0905 00:13:36.365752 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:13:36.367606 kubelet[2133]: I0905 00:13:36.367573 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:13:36.367606 kubelet[2133]: I0905 00:13:36.367604 2133 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 5 00:13:36.367799 kubelet[2133]: I0905 00:13:36.367627 2133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:13:36.367799 kubelet[2133]: I0905 00:13:36.367637 2133 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 5 00:13:36.367799 kubelet[2133]: E0905 00:13:36.367694 2133 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:13:36.438633 kubelet[2133]: E0905 00:13:36.438582 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.467950 kubelet[2133]: E0905 00:13:36.467908 2133 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:13:36.539433 kubelet[2133]: E0905 00:13:36.539280 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.539726 kubelet[2133]: E0905 00:13:36.539601 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms"
Sep 5 00:13:36.640299 kubelet[2133]: E0905 00:13:36.640241 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.668470 kubelet[2133]: E0905 00:13:36.668416 2133 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:13:36.740931 kubelet[2133]: E0905 00:13:36.740858 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.842003 kubelet[2133]: E0905 00:13:36.841868 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:36.940987 kubelet[2133]: E0905 00:13:36.940924 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms"
Sep 5 00:13:36.942939 kubelet[2133]: E0905 00:13:36.942876 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.043500 kubelet[2133]: E0905 00:13:37.043408 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.068551 kubelet[2133]: E0905 00:13:37.068498 2133 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:13:37.144111 kubelet[2133]: E0905 00:13:37.143963 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.244432 kubelet[2133]: E0905 00:13:37.244381 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.344760 kubelet[2133]: E0905 00:13:37.344693 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.445246 kubelet[2133]: E0905 00:13:37.445195 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.545809 kubelet[2133]: E0905 00:13:37.545723 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.559446 kubelet[2133]: W0905 00:13:37.559374 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:37.559446 kubelet[2133]: E0905 00:13:37.559436 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:37.646134 kubelet[2133]: E0905 00:13:37.646056 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.688858 kubelet[2133]: E0905 00:13:37.688703 2133 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a95047ced5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:13:36.327748956 +0000 UTC m=+0.722792092,LastTimestamp:2025-09-05 00:13:36.327748956 +0000 UTC m=+0.722792092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 00:13:37.742469 kubelet[2133]: E0905 00:13:37.742331 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s"
Sep 5 00:13:37.746516 kubelet[2133]: E0905 00:13:37.746456 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.806404 kubelet[2133]: W0905 00:13:37.806328 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:37.806519 kubelet[2133]: E0905 00:13:37.806418 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:37.847371 kubelet[2133]: E0905 00:13:37.847282 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:37.869545 kubelet[2133]: E0905 00:13:37.869482 2133 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:13:37.893339 kubelet[2133]: W0905 00:13:37.893270 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:37.893387 kubelet[2133]: E0905 00:13:37.893358 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:37.948110 kubelet[2133]: E0905 00:13:37.948036 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:38.048136 kubelet[2133]: W0905 00:13:38.047949 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:38.048136 kubelet[2133]: E0905 00:13:38.048038 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:38.048958 kubelet[2133]: E0905 00:13:38.048851 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:38.115931 kubelet[2133]: I0905 00:13:38.115866 2133 policy_none.go:49] "None policy: Start"
Sep 5 00:13:38.115931 kubelet[2133]: I0905 00:13:38.115913 2133 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 00:13:38.115931 kubelet[2133]: I0905 00:13:38.115932 2133 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:13:38.131070 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 5 00:13:38.148384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 5 00:13:38.149193 kubelet[2133]: E0905 00:13:38.149164 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:38.152001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 5 00:13:38.163753 kubelet[2133]: I0905 00:13:38.163707 2133 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 00:13:38.164014 kubelet[2133]: I0905 00:13:38.163996 2133 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:13:38.164060 kubelet[2133]: I0905 00:13:38.164011 2133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:13:38.164311 kubelet[2133]: I0905 00:13:38.164290 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:13:38.167481 kubelet[2133]: E0905 00:13:38.165919 2133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 00:13:38.167481 kubelet[2133]: E0905 00:13:38.165976 2133 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 5 00:13:38.265513 kubelet[2133]: I0905 00:13:38.265471 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:13:38.265901 kubelet[2133]: E0905 00:13:38.265876 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Sep 5 00:13:38.465756 kubelet[2133]: E0905 00:13:38.465701 2133 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:38.467727 kubelet[2133]: I0905 00:13:38.467700 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:13:38.467988 kubelet[2133]: E0905 00:13:38.467966 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Sep 5 00:13:38.870236 kubelet[2133]: I0905 00:13:38.870088 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:13:38.870830 kubelet[2133]: E0905 00:13:38.870498 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Sep 5 00:13:39.342711 kubelet[2133]: E0905 00:13:39.342651 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="3.2s"
Sep 5 00:13:39.478808 systemd[1]: Created slice kubepods-burstable-podd2a2b701841b040afbd96c0fadcb75c8.slice - libcontainer container kubepods-burstable-podd2a2b701841b040afbd96c0fadcb75c8.slice.
Sep 5 00:13:39.486601 kubelet[2133]: E0905 00:13:39.486565 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:13:39.488685 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 5 00:13:39.499957 kubelet[2133]: E0905 00:13:39.499920 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:13:39.501445 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 5 00:13:39.502946 kubelet[2133]: E0905 00:13:39.502918 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:13:39.557509 kubelet[2133]: I0905 00:13:39.557440 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:39.557509 kubelet[2133]: I0905 00:13:39.557501 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:13:39.557642 kubelet[2133]: I0905 00:13:39.557524 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:39.557642 kubelet[2133]: I0905 00:13:39.557542 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:39.557642 kubelet[2133]: I0905 00:13:39.557618 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:39.557642 kubelet[2133]: I0905 00:13:39.557634 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:39.557809 kubelet[2133]: I0905 00:13:39.557654 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:39.557809 kubelet[2133]: I0905 00:13:39.557673 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:39.557809 kubelet[2133]: I0905 00:13:39.557690 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:39.557937 kubelet[2133]: W0905 00:13:39.557919 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:39.557977 kubelet[2133]: E0905 00:13:39.557959 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:39.672137 kubelet[2133]: I0905 00:13:39.672100 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:13:39.672566 kubelet[2133]: E0905 00:13:39.672525 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Sep 5 00:13:39.787423 kubelet[2133]: E0905 00:13:39.787376 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:39.788237 containerd[1459]: time="2025-09-05T00:13:39.788195781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2a2b701841b040afbd96c0fadcb75c8,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:39.800676 kubelet[2133]: E0905 00:13:39.800632 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:39.801105 containerd[1459]: time="2025-09-05T00:13:39.801060386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:39.803286 kubelet[2133]: E0905 00:13:39.803256 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:39.803581 containerd[1459]: time="2025-09-05T00:13:39.803549901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:40.066213 kubelet[2133]: W0905 00:13:40.066058 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:40.066213 kubelet[2133]: E0905 00:13:40.066101 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:40.415269 kubelet[2133]: W0905 00:13:40.415213 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:40.415269 kubelet[2133]: E0905 00:13:40.415267 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:40.474753 kubelet[2133]: W0905 00:13:40.474693 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Sep 5 00:13:40.474753 kubelet[2133]: E0905 00:13:40.474741 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:13:41.097609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223977280.mount: Deactivated successfully.
Sep 5 00:13:41.125188 containerd[1459]: time="2025-09-05T00:13:41.125100697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:13:41.126246 containerd[1459]: time="2025-09-05T00:13:41.126209390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:13:41.134286 containerd[1459]: time="2025-09-05T00:13:41.134243374Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:13:41.142935 containerd[1459]: time="2025-09-05T00:13:41.142869095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 00:13:41.151229 containerd[1459]: time="2025-09-05T00:13:41.151178663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 00:13:41.159723 containerd[1459]: time="2025-09-05T00:13:41.159685782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 5 00:13:41.168245 containerd[1459]: time="2025-09-05T00:13:41.168206479Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:13:41.176413 containerd[1459]: time="2025-09-05T00:13:41.176358884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:13:41.177248 containerd[1459]: time="2025-09-05T00:13:41.177198156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.376064312s" Sep 5 00:13:41.184633 containerd[1459]: time="2025-09-05T00:13:41.184603656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.381007676s" Sep 5 00:13:41.185293 containerd[1459]: time="2025-09-05T00:13:41.185260931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.396987083s" Sep 5 00:13:41.274389 kubelet[2133]: I0905 00:13:41.274343 2133 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Sep 5 00:13:41.274814 kubelet[2133]: E0905 00:13:41.274714 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Sep 5 00:13:41.378257 containerd[1459]: time="2025-09-05T00:13:41.377917448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:13:41.378257 containerd[1459]: time="2025-09-05T00:13:41.377974460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:13:41.378257 containerd[1459]: time="2025-09-05T00:13:41.377987958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.378257 containerd[1459]: time="2025-09-05T00:13:41.378063038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.383087 containerd[1459]: time="2025-09-05T00:13:41.382816668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:13:41.383087 containerd[1459]: time="2025-09-05T00:13:41.382867366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:13:41.383087 containerd[1459]: time="2025-09-05T00:13:41.382881246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.383087 containerd[1459]: time="2025-09-05T00:13:41.382956034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.384173 containerd[1459]: time="2025-09-05T00:13:41.384108370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:13:41.384236 containerd[1459]: time="2025-09-05T00:13:41.384188270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:13:41.384286 containerd[1459]: time="2025-09-05T00:13:41.384228335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.384456 containerd[1459]: time="2025-09-05T00:13:41.384406553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:13:41.404977 systemd[1]: Started cri-containerd-130f36502ab36f0db0b6afb4702a128315eecce52b7ff8ddd054c0ed0f4b7d1e.scope - libcontainer container 130f36502ab36f0db0b6afb4702a128315eecce52b7ff8ddd054c0ed0f4b7d1e. Sep 5 00:13:41.409937 systemd[1]: Started cri-containerd-15e92b4f621278421ea17fb8c1ab2b08a467dd167a74f79abd737ff34a0ab185.scope - libcontainer container 15e92b4f621278421ea17fb8c1ab2b08a467dd167a74f79abd737ff34a0ab185. Sep 5 00:13:41.412426 systemd[1]: Started cri-containerd-28c05dcd8b0d1a3a807ced1e1904edaa9c4919e28ba951733fe5accd0cc1d5c4.scope - libcontainer container 28c05dcd8b0d1a3a807ced1e1904edaa9c4919e28ba951733fe5accd0cc1d5c4. 
Sep 5 00:13:41.458999 containerd[1459]: time="2025-09-05T00:13:41.458949207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"130f36502ab36f0db0b6afb4702a128315eecce52b7ff8ddd054c0ed0f4b7d1e\"" Sep 5 00:13:41.460477 kubelet[2133]: E0905 00:13:41.460285 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:41.463039 containerd[1459]: time="2025-09-05T00:13:41.462744844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2a2b701841b040afbd96c0fadcb75c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"28c05dcd8b0d1a3a807ced1e1904edaa9c4919e28ba951733fe5accd0cc1d5c4\"" Sep 5 00:13:41.463944 containerd[1459]: time="2025-09-05T00:13:41.463906569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"15e92b4f621278421ea17fb8c1ab2b08a467dd167a74f79abd737ff34a0ab185\"" Sep 5 00:13:41.464004 kubelet[2133]: E0905 00:13:41.463971 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:41.465007 kubelet[2133]: E0905 00:13:41.464950 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:41.465748 containerd[1459]: time="2025-09-05T00:13:41.465721973Z" level=info msg="CreateContainer within sandbox \"130f36502ab36f0db0b6afb4702a128315eecce52b7ff8ddd054c0ed0f4b7d1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:13:41.465981 containerd[1459]: 
time="2025-09-05T00:13:41.465934495Z" level=info msg="CreateContainer within sandbox \"28c05dcd8b0d1a3a807ced1e1904edaa9c4919e28ba951733fe5accd0cc1d5c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:13:41.466975 containerd[1459]: time="2025-09-05T00:13:41.466957276Z" level=info msg="CreateContainer within sandbox \"15e92b4f621278421ea17fb8c1ab2b08a467dd167a74f79abd737ff34a0ab185\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:13:41.569955 kubelet[2133]: W0905 00:13:41.569918 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Sep 5 00:13:41.569955 kubelet[2133]: E0905 00:13:41.569958 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:13:41.577623 containerd[1459]: time="2025-09-05T00:13:41.577554343Z" level=info msg="CreateContainer within sandbox \"28c05dcd8b0d1a3a807ced1e1904edaa9c4919e28ba951733fe5accd0cc1d5c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12e604433f09581f3d6b7b5acb42d8e28ac72053cac5cdeba4239b6e5f066210\"" Sep 5 00:13:41.578119 containerd[1459]: time="2025-09-05T00:13:41.578087665Z" level=info msg="StartContainer for \"12e604433f09581f3d6b7b5acb42d8e28ac72053cac5cdeba4239b6e5f066210\"" Sep 5 00:13:41.610036 systemd[1]: Started cri-containerd-12e604433f09581f3d6b7b5acb42d8e28ac72053cac5cdeba4239b6e5f066210.scope - libcontainer container 12e604433f09581f3d6b7b5acb42d8e28ac72053cac5cdeba4239b6e5f066210. 
Sep 5 00:13:41.633699 containerd[1459]: time="2025-09-05T00:13:41.633572284Z" level=info msg="CreateContainer within sandbox \"130f36502ab36f0db0b6afb4702a128315eecce52b7ff8ddd054c0ed0f4b7d1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"788830d96614c429a606cba06ebdd051d2411a46b0dd1e4f8f3a4eb4614b7fb3\"" Sep 5 00:13:41.634457 containerd[1459]: time="2025-09-05T00:13:41.634422128Z" level=info msg="StartContainer for \"788830d96614c429a606cba06ebdd051d2411a46b0dd1e4f8f3a4eb4614b7fb3\"" Sep 5 00:13:41.664018 systemd[1]: Started cri-containerd-788830d96614c429a606cba06ebdd051d2411a46b0dd1e4f8f3a4eb4614b7fb3.scope - libcontainer container 788830d96614c429a606cba06ebdd051d2411a46b0dd1e4f8f3a4eb4614b7fb3. Sep 5 00:13:41.665583 containerd[1459]: time="2025-09-05T00:13:41.665519328Z" level=info msg="StartContainer for \"12e604433f09581f3d6b7b5acb42d8e28ac72053cac5cdeba4239b6e5f066210\" returns successfully" Sep 5 00:13:41.667252 containerd[1459]: time="2025-09-05T00:13:41.667221793Z" level=info msg="CreateContainer within sandbox \"15e92b4f621278421ea17fb8c1ab2b08a467dd167a74f79abd737ff34a0ab185\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8f2906b017ab527a6f5b11319686e16d8e37ec932fa8632c2d3ef6aa39d58d0\"" Sep 5 00:13:41.668711 containerd[1459]: time="2025-09-05T00:13:41.667696921Z" level=info msg="StartContainer for \"f8f2906b017ab527a6f5b11319686e16d8e37ec932fa8632c2d3ef6aa39d58d0\"" Sep 5 00:13:41.700399 systemd[1]: Started cri-containerd-f8f2906b017ab527a6f5b11319686e16d8e37ec932fa8632c2d3ef6aa39d58d0.scope - libcontainer container f8f2906b017ab527a6f5b11319686e16d8e37ec932fa8632c2d3ef6aa39d58d0. 
Sep 5 00:13:41.725290 containerd[1459]: time="2025-09-05T00:13:41.724076300Z" level=info msg="StartContainer for \"788830d96614c429a606cba06ebdd051d2411a46b0dd1e4f8f3a4eb4614b7fb3\" returns successfully" Sep 5 00:13:41.763865 containerd[1459]: time="2025-09-05T00:13:41.763806515Z" level=info msg="StartContainer for \"f8f2906b017ab527a6f5b11319686e16d8e37ec932fa8632c2d3ef6aa39d58d0\" returns successfully" Sep 5 00:13:42.386860 kubelet[2133]: E0905 00:13:42.386806 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:42.387339 kubelet[2133]: E0905 00:13:42.386937 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:42.387761 kubelet[2133]: E0905 00:13:42.387719 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:42.387877 kubelet[2133]: E0905 00:13:42.387851 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:42.389657 kubelet[2133]: E0905 00:13:42.389621 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:42.389796 kubelet[2133]: E0905 00:13:42.389759 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:42.713529 kubelet[2133]: E0905 00:13:42.713248 2133 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:13:43.049821 kubelet[2133]: 
E0905 00:13:43.049660 2133 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:13:43.391444 kubelet[2133]: E0905 00:13:43.391344 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:43.391884 kubelet[2133]: E0905 00:13:43.391463 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:43.391884 kubelet[2133]: E0905 00:13:43.391486 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:43.391884 kubelet[2133]: E0905 00:13:43.391623 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:43.396083 kubelet[2133]: E0905 00:13:43.396049 2133 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:13:43.824659 kubelet[2133]: E0905 00:13:43.824609 2133 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:13:44.342858 kubelet[2133]: E0905 00:13:44.342816 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:44.343012 kubelet[2133]: E0905 00:13:44.342966 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 
00:13:44.392398 kubelet[2133]: E0905 00:13:44.392352 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:13:44.392938 kubelet[2133]: E0905 00:13:44.392499 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:13:44.475997 kubelet[2133]: I0905 00:13:44.475957 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:13:44.482301 kubelet[2133]: I0905 00:13:44.482267 2133 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:13:44.482301 kubelet[2133]: E0905 00:13:44.482292 2133 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:13:44.492857 kubelet[2133]: E0905 00:13:44.492830 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:44.593457 kubelet[2133]: E0905 00:13:44.593331 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:44.693935 kubelet[2133]: E0905 00:13:44.693886 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:44.796890 kubelet[2133]: E0905 00:13:44.796835 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:44.898102 kubelet[2133]: E0905 00:13:44.897966 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:44.998595 kubelet[2133]: E0905 00:13:44.998551 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.099206 kubelet[2133]: E0905 
00:13:45.099168 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.199886 kubelet[2133]: E0905 00:13:45.199833 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.300612 kubelet[2133]: E0905 00:13:45.300561 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.400876 kubelet[2133]: E0905 00:13:45.400839 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.501609 kubelet[2133]: E0905 00:13:45.501474 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.602098 kubelet[2133]: E0905 00:13:45.602060 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.626721 systemd[1]: Reloading requested from client PID 2413 ('systemctl') (unit session-7.scope)... Sep 5 00:13:45.626737 systemd[1]: Reloading... Sep 5 00:13:45.702561 kubelet[2133]: E0905 00:13:45.702518 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.705808 zram_generator::config[2455]: No configuration found. Sep 5 00:13:45.803737 kubelet[2133]: E0905 00:13:45.803637 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.863379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 5 00:13:45.904603 kubelet[2133]: E0905 00:13:45.904544 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:45.968581 systemd[1]: Reloading finished in 341 ms. Sep 5 00:13:46.005306 kubelet[2133]: E0905 00:13:46.005243 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:13:46.020242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:46.037747 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:13:46.038113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:46.038172 systemd[1]: kubelet.service: Consumed 1.248s CPU time, 138.0M memory peak, 0B memory swap peak. Sep 5 00:13:46.048285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:13:46.222536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:13:46.227813 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:13:46.274340 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:13:46.274340 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:13:46.274340 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:13:46.274892 kubelet[2497]: I0905 00:13:46.274379 2497 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:13:46.280717 kubelet[2497]: I0905 00:13:46.280683 2497 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:13:46.280717 kubelet[2497]: I0905 00:13:46.280706 2497 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:13:46.280942 kubelet[2497]: I0905 00:13:46.280918 2497 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:13:46.282016 kubelet[2497]: I0905 00:13:46.281993 2497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:13:46.284272 kubelet[2497]: I0905 00:13:46.284217 2497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:13:46.288789 kubelet[2497]: E0905 00:13:46.288737 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:13:46.288789 kubelet[2497]: I0905 00:13:46.288785 2497 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:13:46.294359 kubelet[2497]: I0905 00:13:46.294339 2497 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:13:46.294634 kubelet[2497]: I0905 00:13:46.294590 2497 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:13:46.294823 kubelet[2497]: I0905 00:13:46.294623 2497 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:13:46.294905 kubelet[2497]: I0905 00:13:46.294825 2497 topology_manager.go:138] "Creating topology manager with none policy" Sep 
5 00:13:46.294905 kubelet[2497]: I0905 00:13:46.294841 2497 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:13:46.294905 kubelet[2497]: I0905 00:13:46.294898 2497 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:13:46.295070 kubelet[2497]: I0905 00:13:46.295052 2497 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:13:46.295099 kubelet[2497]: I0905 00:13:46.295079 2497 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:13:46.295099 kubelet[2497]: I0905 00:13:46.295096 2497 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:13:46.295144 kubelet[2497]: I0905 00:13:46.295106 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:13:46.296420 kubelet[2497]: I0905 00:13:46.296387 2497 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:13:46.296768 kubelet[2497]: I0905 00:13:46.296741 2497 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:13:46.297217 kubelet[2497]: I0905 00:13:46.297190 2497 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:13:46.297255 kubelet[2497]: I0905 00:13:46.297223 2497 server.go:1287] "Started kubelet" Sep 5 00:13:46.302002 kubelet[2497]: I0905 00:13:46.301169 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:13:46.302002 kubelet[2497]: I0905 00:13:46.301465 2497 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:13:46.302002 kubelet[2497]: I0905 00:13:46.301528 2497 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:13:46.303216 kubelet[2497]: I0905 00:13:46.302374 2497 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:13:46.304632 kubelet[2497]: I0905 00:13:46.304608 2497 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:13:46.306208 kubelet[2497]: I0905 00:13:46.305647 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:13:46.308750 kubelet[2497]: E0905 00:13:46.308724 2497 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:13:46.308843 kubelet[2497]: I0905 00:13:46.308764 2497 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:13:46.309168 kubelet[2497]: I0905 00:13:46.309151 2497 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:13:46.309389 kubelet[2497]: I0905 00:13:46.309372 2497 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:13:46.310579 kubelet[2497]: I0905 00:13:46.310547 2497 factory.go:221] Registration of the systemd container factory successfully
Sep 5 00:13:46.310620 kubelet[2497]: E0905 00:13:46.310601 2497 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:13:46.311020 kubelet[2497]: I0905 00:13:46.310667 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:13:46.312529 kubelet[2497]: I0905 00:13:46.312503 2497 factory.go:221] Registration of the containerd container factory successfully
Sep 5 00:13:46.321635 kubelet[2497]: I0905 00:13:46.321448 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:13:46.323676 kubelet[2497]: I0905 00:13:46.323421 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:13:46.323676 kubelet[2497]: I0905 00:13:46.323449 2497 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 5 00:13:46.323676 kubelet[2497]: I0905 00:13:46.323474 2497 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:13:46.323676 kubelet[2497]: I0905 00:13:46.323492 2497 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 5 00:13:46.324375 kubelet[2497]: E0905 00:13:46.324317 2497 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:13:46.349422 kubelet[2497]: I0905 00:13:46.349387 2497 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:13:46.349422 kubelet[2497]: I0905 00:13:46.349407 2497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:13:46.349422 kubelet[2497]: I0905 00:13:46.349436 2497 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:13:46.349645 kubelet[2497]: I0905 00:13:46.349626 2497 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:13:46.349673 kubelet[2497]: I0905 00:13:46.349638 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:13:46.349673 kubelet[2497]: I0905 00:13:46.349656 2497 policy_none.go:49] "None policy: Start"
Sep 5 00:13:46.349673 kubelet[2497]: I0905 00:13:46.349665 2497 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 00:13:46.349673 kubelet[2497]: I0905 00:13:46.349676 2497 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:13:46.349803 kubelet[2497]: I0905 00:13:46.349766 2497 state_mem.go:75] "Updated machine memory state"
Sep 5 00:13:46.354307 kubelet[2497]: I0905 00:13:46.354266 2497 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 00:13:46.354510 kubelet[2497]: I0905 00:13:46.354452 2497 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:13:46.354510 kubelet[2497]: I0905 00:13:46.354469 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:13:46.354724 kubelet[2497]: I0905 00:13:46.354707 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:13:46.356032 kubelet[2497]: E0905 00:13:46.355984 2497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 00:13:46.425848 kubelet[2497]: I0905 00:13:46.425724 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:46.426013 kubelet[2497]: I0905 00:13:46.425726 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:13:46.427789 kubelet[2497]: I0905 00:13:46.426021 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.460297 kubelet[2497]: I0905 00:13:46.460238 2497 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:13:46.605329 update_engine[1447]: I20250905 00:13:46.605110 1447 update_attempter.cc:509] Updating boot flags...
Sep 5 00:13:46.610199 kubelet[2497]: I0905 00:13:46.610145 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:13:46.610199 kubelet[2497]: I0905 00:13:46.610196 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:46.610294 kubelet[2497]: I0905 00:13:46.610220 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:46.610294 kubelet[2497]: I0905 00:13:46.610237 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.610294 kubelet[2497]: I0905 00:13:46.610252 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.610294 kubelet[2497]: I0905 00:13:46.610268 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.610294 kubelet[2497]: I0905 00:13:46.610283 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.610414 kubelet[2497]: I0905 00:13:46.610301 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2a2b701841b040afbd96c0fadcb75c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2a2b701841b040afbd96c0fadcb75c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:46.610414 kubelet[2497]: I0905 00:13:46.610316 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:13:46.696698 kubelet[2497]: I0905 00:13:46.696653 2497 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 5 00:13:46.696856 kubelet[2497]: I0905 00:13:46.696745 2497 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:13:46.995621 kubelet[2497]: E0905 00:13:46.995403 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:46.995621 kubelet[2497]: E0905 00:13:46.995436 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:46.995621 kubelet[2497]: E0905 00:13:46.995548 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:47.295411 kubelet[2497]: I0905 00:13:47.295309 2497 apiserver.go:52] "Watching apiserver"
Sep 5 00:13:47.309674 kubelet[2497]: I0905 00:13:47.309619 2497 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 00:13:47.335499 kubelet[2497]: I0905 00:13:47.335372 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:47.335499 kubelet[2497]: E0905 00:13:47.335421 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:47.335600 kubelet[2497]: E0905 00:13:47.335575 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:47.366815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2533)
Sep 5 00:13:47.405810 kubelet[2497]: E0905 00:13:47.404592 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:13:47.405810 kubelet[2497]: E0905 00:13:47.404766 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:47.413853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2531)
Sep 5 00:13:47.449808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2531)
Sep 5 00:13:47.612311 kubelet[2497]: I0905 00:13:47.612110 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.612089665 podStartE2EDuration="1.612089665s" podCreationTimestamp="2025-09-05 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:13:47.445453368 +0000 UTC m=+1.213323101" watchObservedRunningTime="2025-09-05 00:13:47.612089665 +0000 UTC m=+1.379959399"
Sep 5 00:13:47.612833 kubelet[2497]: I0905 00:13:47.612763 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.612756216 podStartE2EDuration="1.612756216s" podCreationTimestamp="2025-09-05 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:13:47.612739983 +0000 UTC m=+1.380609716" watchObservedRunningTime="2025-09-05 00:13:47.612756216 +0000 UTC m=+1.380625949"
Sep 5 00:13:47.930949 kubelet[2497]: I0905 00:13:47.930879 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9308632860000001 podStartE2EDuration="1.930863286s" podCreationTimestamp="2025-09-05 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:13:47.769378052 +0000 UTC m=+1.537247805" watchObservedRunningTime="2025-09-05 00:13:47.930863286 +0000 UTC m=+1.698733019"
Sep 5 00:13:48.002089 sudo[2548]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 5 00:13:48.002454 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 5 00:13:48.336988 kubelet[2497]: E0905 00:13:48.336856 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:48.337379 kubelet[2497]: E0905 00:13:48.337104 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:48.489087 sudo[2548]: pam_unix(sudo:session): session closed for user root
Sep 5 00:13:49.338389 kubelet[2497]: E0905 00:13:49.338351 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:50.162742 kubelet[2497]: I0905 00:13:50.162694 2497 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:13:50.163192 containerd[1459]: time="2025-09-05T00:13:50.163140531Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:13:50.163544 kubelet[2497]: I0905 00:13:50.163349 2497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:13:50.340399 kubelet[2497]: E0905 00:13:50.340344 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:51.712147 kubelet[2497]: E0905 00:13:51.712111 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.099883 systemd[1]: Created slice kubepods-besteffort-podbcd85f75_cbfd_4c8b_80bd_41651ab1e466.slice - libcontainer container kubepods-besteffort-podbcd85f75_cbfd_4c8b_80bd_41651ab1e466.slice.
Sep 5 00:13:52.110119 systemd[1]: Created slice kubepods-besteffort-pod33b9d1d7_f5d9_4ada_bf88_94b8b90bca99.slice - libcontainer container kubepods-besteffort-pod33b9d1d7_f5d9_4ada_bf88_94b8b90bca99.slice.
Sep 5 00:13:52.121269 systemd[1]: Created slice kubepods-burstable-pod8e650c0f_f847_4fd8_b57e_fe48516d470e.slice - libcontainer container kubepods-burstable-pod8e650c0f_f847_4fd8_b57e_fe48516d470e.slice.
Sep 5 00:13:52.145260 kubelet[2497]: I0905 00:13:52.145229 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777dw\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-kube-api-access-777dw\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145260 kubelet[2497]: I0905 00:13:52.145261 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33b9d1d7-f5d9-4ada-bf88-94b8b90bca99-xtables-lock\") pod \"kube-proxy-9wv8w\" (UID: \"33b9d1d7-f5d9-4ada-bf88-94b8b90bca99\") " pod="kube-system/kube-proxy-9wv8w"
Sep 5 00:13:52.145390 kubelet[2497]: I0905 00:13:52.145279 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-etc-cni-netd\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145390 kubelet[2497]: I0905 00:13:52.145302 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-lib-modules\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145630 kubelet[2497]: I0905 00:13:52.145607 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-bpf-maps\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145672 kubelet[2497]: I0905 00:13:52.145643 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-cgroup\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145672 kubelet[2497]: I0905 00:13:52.145665 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-config-path\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.145729 kubelet[2497]: I0905 00:13:52.145715 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-xtables-lock\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146142 kubelet[2497]: I0905 00:13:52.145933 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e650c0f-f847-4fd8-b57e-fe48516d470e-clustermesh-secrets\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146142 kubelet[2497]: I0905 00:13:52.145989 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-kernel\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146142 kubelet[2497]: I0905 00:13:52.146010 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-hubble-tls\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146142 kubelet[2497]: I0905 00:13:52.146093 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gjbks\" (UID: \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\") " pod="kube-system/cilium-operator-6c4d7847fc-gjbks"
Sep 5 00:13:52.146265 kubelet[2497]: I0905 00:13:52.146144 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dh8h\" (UniqueName: \"kubernetes.io/projected/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-kube-api-access-2dh8h\") pod \"cilium-operator-6c4d7847fc-gjbks\" (UID: \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\") " pod="kube-system/cilium-operator-6c4d7847fc-gjbks"
Sep 5 00:13:52.146265 kubelet[2497]: I0905 00:13:52.146209 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-run\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146265 kubelet[2497]: I0905 00:13:52.146255 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33b9d1d7-f5d9-4ada-bf88-94b8b90bca99-kube-proxy\") pod \"kube-proxy-9wv8w\" (UID: \"33b9d1d7-f5d9-4ada-bf88-94b8b90bca99\") " pod="kube-system/kube-proxy-9wv8w"
Sep 5 00:13:52.146353 kubelet[2497]: I0905 00:13:52.146293 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cni-path\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146353 kubelet[2497]: I0905 00:13:52.146331 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33b9d1d7-f5d9-4ada-bf88-94b8b90bca99-lib-modules\") pod \"kube-proxy-9wv8w\" (UID: \"33b9d1d7-f5d9-4ada-bf88-94b8b90bca99\") " pod="kube-system/kube-proxy-9wv8w"
Sep 5 00:13:52.146353 kubelet[2497]: I0905 00:13:52.146353 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-hostproc\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146438 kubelet[2497]: I0905 00:13:52.146374 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-net\") pod \"cilium-pct5p\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " pod="kube-system/cilium-pct5p"
Sep 5 00:13:52.146438 kubelet[2497]: I0905 00:13:52.146395 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd95q\" (UniqueName: \"kubernetes.io/projected/33b9d1d7-f5d9-4ada-bf88-94b8b90bca99-kube-api-access-sd95q\") pod \"kube-proxy-9wv8w\" (UID: \"33b9d1d7-f5d9-4ada-bf88-94b8b90bca99\") " pod="kube-system/kube-proxy-9wv8w"
Sep 5 00:13:52.164505 sudo[1635]: pam_unix(sudo:session): session closed for user root
Sep 5 00:13:52.166328 sshd[1632]: pam_unix(sshd:session): session closed for user core
Sep 5 00:13:52.170545 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:41460.service: Deactivated successfully.
Sep 5 00:13:52.172728 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 00:13:52.172941 systemd[1]: session-7.scope: Consumed 6.965s CPU time, 160.4M memory peak, 0B memory swap peak.
Sep 5 00:13:52.173401 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Sep 5 00:13:52.174301 systemd-logind[1441]: Removed session 7.
Sep 5 00:13:52.309788 kubelet[2497]: E0905 00:13:52.309710 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.342462 kubelet[2497]: E0905 00:13:52.342364 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.342643 kubelet[2497]: E0905 00:13:52.342619 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.707989 kubelet[2497]: E0905 00:13:52.707929 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.708764 containerd[1459]: time="2025-09-05T00:13:52.708704233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gjbks,Uid:bcd85f75-cbfd-4c8b-80bd-41651ab1e466,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:52.713007 kubelet[2497]: E0905 00:13:52.712971 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.713363 containerd[1459]: time="2025-09-05T00:13:52.713333914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wv8w,Uid:33b9d1d7-f5d9-4ada-bf88-94b8b90bca99,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:52.725370 kubelet[2497]: E0905 00:13:52.725345 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:52.725704 containerd[1459]: time="2025-09-05T00:13:52.725672908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pct5p,Uid:8e650c0f-f847-4fd8-b57e-fe48516d470e,Namespace:kube-system,Attempt:0,}"
Sep 5 00:13:53.343301 kubelet[2497]: E0905 00:13:53.343266 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:53.733722 containerd[1459]: time="2025-09-05T00:13:53.733452857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:13:53.733722 containerd[1459]: time="2025-09-05T00:13:53.733532416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:13:53.733722 containerd[1459]: time="2025-09-05T00:13:53.733554331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.733722 containerd[1459]: time="2025-09-05T00:13:53.733647448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.766934 systemd[1]: Started cri-containerd-a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc.scope - libcontainer container a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc.
Sep 5 00:13:53.787172 containerd[1459]: time="2025-09-05T00:13:53.787064376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:13:53.787172 containerd[1459]: time="2025-09-05T00:13:53.787129848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:13:53.787172 containerd[1459]: time="2025-09-05T00:13:53.787142193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.787573 containerd[1459]: time="2025-09-05T00:13:53.787507247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.811257 containerd[1459]: time="2025-09-05T00:13:53.811205317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gjbks,Uid:bcd85f75-cbfd-4c8b-80bd-41651ab1e466,Namespace:kube-system,Attempt:0,} returns sandbox id \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\""
Sep 5 00:13:53.811821 kubelet[2497]: E0905 00:13:53.811755 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:53.815075 containerd[1459]: time="2025-09-05T00:13:53.813035247Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 5 00:13:53.815053 systemd[1]: Started cri-containerd-f2c811c95a1544ad08d130670f74414fa704e94a6a9980e4abc6f36d605ff2b8.scope - libcontainer container f2c811c95a1544ad08d130670f74414fa704e94a6a9980e4abc6f36d605ff2b8.
Sep 5 00:13:53.835356 containerd[1459]: time="2025-09-05T00:13:53.834965241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:13:53.835356 containerd[1459]: time="2025-09-05T00:13:53.835056444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:13:53.835356 containerd[1459]: time="2025-09-05T00:13:53.835114310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.835356 containerd[1459]: time="2025-09-05T00:13:53.835255696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:13:53.848878 containerd[1459]: time="2025-09-05T00:13:53.848805927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wv8w,Uid:33b9d1d7-f5d9-4ada-bf88-94b8b90bca99,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c811c95a1544ad08d130670f74414fa704e94a6a9980e4abc6f36d605ff2b8\""
Sep 5 00:13:53.849661 kubelet[2497]: E0905 00:13:53.849627 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:53.851744 containerd[1459]: time="2025-09-05T00:13:53.851570627Z" level=info msg="CreateContainer within sandbox \"f2c811c95a1544ad08d130670f74414fa704e94a6a9980e4abc6f36d605ff2b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:13:53.861932 systemd[1]: Started cri-containerd-92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544.scope - libcontainer container 92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544.
Sep 5 00:13:53.885506 containerd[1459]: time="2025-09-05T00:13:53.885456155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pct5p,Uid:8e650c0f-f847-4fd8-b57e-fe48516d470e,Namespace:kube-system,Attempt:0,} returns sandbox id \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\""
Sep 5 00:13:53.886577 kubelet[2497]: E0905 00:13:53.886551 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:54.864583 containerd[1459]: time="2025-09-05T00:13:54.864523261Z" level=info msg="CreateContainer within sandbox \"f2c811c95a1544ad08d130670f74414fa704e94a6a9980e4abc6f36d605ff2b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef2e868526f1fe3170bbf7e468a2fe4c6acaf52f7ec417655241f21ac6f16ebc\""
Sep 5 00:13:54.865211 containerd[1459]: time="2025-09-05T00:13:54.865028194Z" level=info msg="StartContainer for \"ef2e868526f1fe3170bbf7e468a2fe4c6acaf52f7ec417655241f21ac6f16ebc\""
Sep 5 00:13:54.902914 systemd[1]: Started cri-containerd-ef2e868526f1fe3170bbf7e468a2fe4c6acaf52f7ec417655241f21ac6f16ebc.scope - libcontainer container ef2e868526f1fe3170bbf7e468a2fe4c6acaf52f7ec417655241f21ac6f16ebc.
Sep 5 00:13:55.000015 containerd[1459]: time="2025-09-05T00:13:54.999959091Z" level=info msg="StartContainer for \"ef2e868526f1fe3170bbf7e468a2fe4c6acaf52f7ec417655241f21ac6f16ebc\" returns successfully"
Sep 5 00:13:55.351546 kubelet[2497]: E0905 00:13:55.351509 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:55.394503 kubelet[2497]: I0905 00:13:55.394438 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9wv8w" podStartSLOduration=4.394419763 podStartE2EDuration="4.394419763s" podCreationTimestamp="2025-09-05 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:13:55.394170424 +0000 UTC m=+9.162040157" watchObservedRunningTime="2025-09-05 00:13:55.394419763 +0000 UTC m=+9.162289506"
Sep 5 00:13:56.353231 kubelet[2497]: E0905 00:13:56.353182 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:57.216948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454439780.mount: Deactivated successfully.
Sep 5 00:13:58.472674 kubelet[2497]: E0905 00:13:58.472623 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:58.606250 containerd[1459]: time="2025-09-05T00:13:58.606179063Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:13:58.632862 containerd[1459]: time="2025-09-05T00:13:58.632744518Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 5 00:13:58.651102 containerd[1459]: time="2025-09-05T00:13:58.651018229Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:13:58.679134 containerd[1459]: time="2025-09-05T00:13:58.679052481Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.865984378s"
Sep 5 00:13:58.679134 containerd[1459]: time="2025-09-05T00:13:58.679113091Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 5 00:13:58.681188 containerd[1459]: time="2025-09-05T00:13:58.681163130Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 5 00:13:58.682171 containerd[1459]: time="2025-09-05T00:13:58.682117795Z" level=info msg="CreateContainer within sandbox \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 5 00:13:58.963644 containerd[1459]: time="2025-09-05T00:13:58.963559331Z" level=info msg="CreateContainer within sandbox \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\""
Sep 5 00:13:58.964144 containerd[1459]: time="2025-09-05T00:13:58.964119213Z" level=info msg="StartContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\""
Sep 5 00:13:58.998062 systemd[1]: Started cri-containerd-5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762.scope - libcontainer container 5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762.
Sep 5 00:13:59.089315 containerd[1459]: time="2025-09-05T00:13:59.089249799Z" level=info msg="StartContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" returns successfully"
Sep 5 00:13:59.358923 kubelet[2497]: E0905 00:13:59.358754 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:13:59.650638 kubelet[2497]: I0905 00:13:59.650081 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gjbks" podStartSLOduration=3.782573544 podStartE2EDuration="8.650063982s" podCreationTimestamp="2025-09-05 00:13:51 +0000 UTC" firstStartedPulling="2025-09-05 00:13:53.812655373 +0000 UTC m=+7.580525106" lastFinishedPulling="2025-09-05 00:13:58.680145811 +0000 UTC m=+12.448015544" observedRunningTime="2025-09-05 00:13:59.649933122 +0000 UTC m=+13.417802855" watchObservedRunningTime="2025-09-05 00:13:59.650063982 +0000 UTC m=+13.417933715"
Sep 5 00:14:00.360033 kubelet[2497]: E0905 00:14:00.359996 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:07.956305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713826351.mount: Deactivated successfully.
Sep 5 00:14:11.972209 containerd[1459]: time="2025-09-05T00:14:11.972116518Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:14:12.046680 containerd[1459]: time="2025-09-05T00:14:12.046557601Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 5 00:14:12.128824 containerd[1459]: time="2025-09-05T00:14:12.128751480Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:14:12.130945 containerd[1459]: time="2025-09-05T00:14:12.130902160Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.449706005s"
Sep 5 00:14:12.131011 containerd[1459]: time="2025-09-05T00:14:12.130955523Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 5 00:14:12.135293 containerd[1459]: time="2025-09-05T00:14:12.135256824Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:14:12.488567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268947932.mount: Deactivated successfully.
Sep 5 00:14:12.676076 containerd[1459]: time="2025-09-05T00:14:12.675924669Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\""
Sep 5 00:14:12.676568 containerd[1459]: time="2025-09-05T00:14:12.676521910Z" level=info msg="StartContainer for \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\""
Sep 5 00:14:12.719986 systemd[1]: Started cri-containerd-194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80.scope - libcontainer container 194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80.
Sep 5 00:14:12.780237 systemd[1]: cri-containerd-194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80.scope: Deactivated successfully.
Sep 5 00:14:12.790539 containerd[1459]: time="2025-09-05T00:14:12.790475509Z" level=info msg="StartContainer for \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\" returns successfully"
Sep 5 00:14:13.400533 kubelet[2497]: E0905 00:14:13.400491 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:13.457058 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:53428.service - OpenSSH per-connection server daemon (10.0.0.1:53428).
Sep 5 00:14:13.485389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80-rootfs.mount: Deactivated successfully.
Sep 5 00:14:13.563183 sshd[3017]: Accepted publickey for core from 10.0.0.1 port 53428 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:13.565409 sshd[3017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:13.570660 systemd-logind[1441]: New session 8 of user core.
Sep 5 00:14:13.577061 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 5 00:14:14.386026 sshd[3017]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:14.391157 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:53428.service: Deactivated successfully.
Sep 5 00:14:14.393578 systemd[1]: session-8.scope: Deactivated successfully.
Sep 5 00:14:14.394344 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Sep 5 00:14:14.395364 systemd-logind[1441]: Removed session 8.
Sep 5 00:14:14.402666 kubelet[2497]: E0905 00:14:14.402633 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:14.995889 containerd[1459]: time="2025-09-05T00:14:14.993497346Z" level=info msg="shim disconnected" id=194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80 namespace=k8s.io
Sep 5 00:14:14.995889 containerd[1459]: time="2025-09-05T00:14:14.995872946Z" level=warning msg="cleaning up after shim disconnected" id=194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80 namespace=k8s.io
Sep 5 00:14:14.995889 containerd[1459]: time="2025-09-05T00:14:14.995887354Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:14:15.405420 kubelet[2497]: E0905 00:14:15.405389 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:15.407226 containerd[1459]: time="2025-09-05T00:14:15.407182069Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:14:15.663334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076342471.mount: Deactivated successfully.
Sep 5 00:14:15.821482 containerd[1459]: time="2025-09-05T00:14:15.821397329Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\""
Sep 5 00:14:15.822034 containerd[1459]: time="2025-09-05T00:14:15.821979157Z" level=info msg="StartContainer for \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\""
Sep 5 00:14:15.860045 systemd[1]: Started cri-containerd-527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca.scope - libcontainer container 527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca.
Sep 5 00:14:15.908684 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:14:15.908959 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:14:15.909030 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:14:15.916133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:14:15.916402 systemd[1]: cri-containerd-527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca.scope: Deactivated successfully.
Sep 5 00:14:15.931652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:14:15.956410 containerd[1459]: time="2025-09-05T00:14:15.956354337Z" level=info msg="StartContainer for \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\" returns successfully"
Sep 5 00:14:15.974678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca-rootfs.mount: Deactivated successfully.
Sep 5 00:14:16.186495 containerd[1459]: time="2025-09-05T00:14:16.186305778Z" level=info msg="shim disconnected" id=527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca namespace=k8s.io
Sep 5 00:14:16.186495 containerd[1459]: time="2025-09-05T00:14:16.186365393Z" level=warning msg="cleaning up after shim disconnected" id=527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca namespace=k8s.io
Sep 5 00:14:16.186495 containerd[1459]: time="2025-09-05T00:14:16.186374671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:14:16.408685 kubelet[2497]: E0905 00:14:16.408460 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:16.409978 containerd[1459]: time="2025-09-05T00:14:16.409927191Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:14:16.674119 containerd[1459]: time="2025-09-05T00:14:16.674049748Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\""
Sep 5 00:14:16.674801 containerd[1459]: time="2025-09-05T00:14:16.674438631Z" level=info msg="StartContainer for \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\""
Sep 5 00:14:16.707963 systemd[1]: Started cri-containerd-e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba.scope - libcontainer container e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba.
Sep 5 00:14:16.738437 systemd[1]: cri-containerd-e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba.scope: Deactivated successfully.
Sep 5 00:14:16.772600 containerd[1459]: time="2025-09-05T00:14:16.772553239Z" level=info msg="StartContainer for \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\" returns successfully"
Sep 5 00:14:16.792632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba-rootfs.mount: Deactivated successfully.
Sep 5 00:14:16.965488 containerd[1459]: time="2025-09-05T00:14:16.965329817Z" level=info msg="shim disconnected" id=e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba namespace=k8s.io
Sep 5 00:14:16.965803 containerd[1459]: time="2025-09-05T00:14:16.965752736Z" level=warning msg="cleaning up after shim disconnected" id=e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba namespace=k8s.io
Sep 5 00:14:16.965803 containerd[1459]: time="2025-09-05T00:14:16.965789548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:14:17.412721 kubelet[2497]: E0905 00:14:17.412683 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:17.414673 containerd[1459]: time="2025-09-05T00:14:17.414610721Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:14:17.626768 containerd[1459]: time="2025-09-05T00:14:17.626690218Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\""
Sep 5 00:14:17.627762 containerd[1459]: time="2025-09-05T00:14:17.627727346Z" level=info msg="StartContainer for \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\""
Sep 5 00:14:17.654946 systemd[1]: Started cri-containerd-b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279.scope - libcontainer container b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279.
Sep 5 00:14:17.684940 systemd[1]: cri-containerd-b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279.scope: Deactivated successfully.
Sep 5 00:14:17.879495 containerd[1459]: time="2025-09-05T00:14:17.879426996Z" level=info msg="StartContainer for \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\" returns successfully"
Sep 5 00:14:17.913422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279-rootfs.mount: Deactivated successfully.
Sep 5 00:14:18.056768 containerd[1459]: time="2025-09-05T00:14:18.056601425Z" level=info msg="shim disconnected" id=b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279 namespace=k8s.io
Sep 5 00:14:18.056768 containerd[1459]: time="2025-09-05T00:14:18.056657263Z" level=warning msg="cleaning up after shim disconnected" id=b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279 namespace=k8s.io
Sep 5 00:14:18.056768 containerd[1459]: time="2025-09-05T00:14:18.056666360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:14:18.416506 kubelet[2497]: E0905 00:14:18.416283 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:18.418188 containerd[1459]: time="2025-09-05T00:14:18.418118137Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:14:18.635715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691057002.mount: Deactivated successfully.
Sep 5 00:14:18.752408 containerd[1459]: time="2025-09-05T00:14:18.752241832Z" level=info msg="CreateContainer within sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\""
Sep 5 00:14:18.752841 containerd[1459]: time="2025-09-05T00:14:18.752808087Z" level=info msg="StartContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\""
Sep 5 00:14:18.784917 systemd[1]: Started cri-containerd-4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59.scope - libcontainer container 4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59.
Sep 5 00:14:18.862883 containerd[1459]: time="2025-09-05T00:14:18.862827561Z" level=info msg="StartContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" returns successfully"
Sep 5 00:14:19.020964 kubelet[2497]: I0905 00:14:19.020852 2497 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 5 00:14:19.296650 systemd[1]: Created slice kubepods-burstable-podf820fb56_6654_4624_a8de_365fba6925dd.slice - libcontainer container kubepods-burstable-podf820fb56_6654_4624_a8de_365fba6925dd.slice.
Sep 5 00:14:19.312382 systemd[1]: Created slice kubepods-burstable-pode03ca0c2_e447_4667_960c_56e44888939f.slice - libcontainer container kubepods-burstable-pode03ca0c2_e447_4667_960c_56e44888939f.slice.
Sep 5 00:14:19.317505 kubelet[2497]: I0905 00:14:19.317472 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7m4\" (UniqueName: \"kubernetes.io/projected/f820fb56-6654-4624-a8de-365fba6925dd-kube-api-access-gf7m4\") pod \"coredns-668d6bf9bc-bqjh6\" (UID: \"f820fb56-6654-4624-a8de-365fba6925dd\") " pod="kube-system/coredns-668d6bf9bc-bqjh6"
Sep 5 00:14:19.317595 kubelet[2497]: I0905 00:14:19.317510 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f820fb56-6654-4624-a8de-365fba6925dd-config-volume\") pod \"coredns-668d6bf9bc-bqjh6\" (UID: \"f820fb56-6654-4624-a8de-365fba6925dd\") " pod="kube-system/coredns-668d6bf9bc-bqjh6"
Sep 5 00:14:19.404186 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:53438.service - OpenSSH per-connection server daemon (10.0.0.1:53438).
Sep 5 00:14:19.417696 kubelet[2497]: I0905 00:14:19.417655 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e03ca0c2-e447-4667-960c-56e44888939f-config-volume\") pod \"coredns-668d6bf9bc-27zll\" (UID: \"e03ca0c2-e447-4667-960c-56e44888939f\") " pod="kube-system/coredns-668d6bf9bc-27zll"
Sep 5 00:14:19.417696 kubelet[2497]: I0905 00:14:19.417694 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4448\" (UniqueName: \"kubernetes.io/projected/e03ca0c2-e447-4667-960c-56e44888939f-kube-api-access-h4448\") pod \"coredns-668d6bf9bc-27zll\" (UID: \"e03ca0c2-e447-4667-960c-56e44888939f\") " pod="kube-system/coredns-668d6bf9bc-27zll"
Sep 5 00:14:19.423054 kubelet[2497]: E0905 00:14:19.422061 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:19.440691 sshd[3327]: Accepted publickey for core from 10.0.0.1 port 53438 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:19.442618 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:19.449223 systemd-logind[1441]: New session 9 of user core.
Sep 5 00:14:19.457919 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 00:14:19.600358 kubelet[2497]: E0905 00:14:19.600206 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:19.609908 containerd[1459]: time="2025-09-05T00:14:19.609868960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqjh6,Uid:f820fb56-6654-4624-a8de-365fba6925dd,Namespace:kube-system,Attempt:0,}"
Sep 5 00:14:19.884396 kubelet[2497]: I0905 00:14:19.884204 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pct5p" podStartSLOduration=10.551721474 podStartE2EDuration="28.798488225s" podCreationTimestamp="2025-09-05 00:13:51 +0000 UTC" firstStartedPulling="2025-09-05 00:13:53.887095522 +0000 UTC m=+7.654965255" lastFinishedPulling="2025-09-05 00:14:12.133862273 +0000 UTC m=+25.901732006" observedRunningTime="2025-09-05 00:14:19.633686141 +0000 UTC m=+33.401555874" watchObservedRunningTime="2025-09-05 00:14:19.798488225 +0000 UTC m=+33.566357958"
Sep 5 00:14:19.917748 kubelet[2497]: E0905 00:14:19.917705 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:19.918279 containerd[1459]: time="2025-09-05T00:14:19.918244224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27zll,Uid:e03ca0c2-e447-4667-960c-56e44888939f,Namespace:kube-system,Attempt:0,}"
Sep 5 00:14:19.984846 sshd[3327]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:19.989001 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:53438.service: Deactivated successfully.
Sep 5 00:14:19.991244 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 00:14:19.991952 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Sep 5 00:14:19.992983 systemd-logind[1441]: Removed session 9.
Sep 5 00:14:20.730049 kubelet[2497]: E0905 00:14:20.726683 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:21.393073 systemd-networkd[1394]: cilium_host: Link UP
Sep 5 00:14:21.396113 systemd-networkd[1394]: cilium_net: Link UP
Sep 5 00:14:21.396119 systemd-networkd[1394]: cilium_net: Gained carrier
Sep 5 00:14:21.399996 systemd-networkd[1394]: cilium_host: Gained carrier
Sep 5 00:14:21.400360 systemd-networkd[1394]: cilium_host: Gained IPv6LL
Sep 5 00:14:21.625295 systemd-networkd[1394]: cilium_vxlan: Link UP
Sep 5 00:14:21.625310 systemd-networkd[1394]: cilium_vxlan: Gained carrier
Sep 5 00:14:21.768236 systemd-networkd[1394]: cilium_net: Gained IPv6LL
Sep 5 00:14:21.940811 kernel: NET: Registered PF_ALG protocol family
Sep 5 00:14:22.654382 systemd-networkd[1394]: lxc_health: Link UP
Sep 5 00:14:22.658733 systemd-networkd[1394]: lxc_health: Gained carrier
Sep 5 00:14:22.727887 kubelet[2497]: E0905 00:14:22.727572 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:22.867830 systemd-networkd[1394]: lxc038e63e2dd25: Link UP
Sep 5 00:14:22.890833 kernel: eth0: renamed from tmp4da9a
Sep 5 00:14:22.896623 systemd-networkd[1394]: lxc038e63e2dd25: Gained carrier
Sep 5 00:14:22.897276 systemd-networkd[1394]: lxcb8b455e293d8: Link UP
Sep 5 00:14:22.950011 kernel: eth0: renamed from tmp36aab
Sep 5 00:14:22.956224 systemd-networkd[1394]: lxcb8b455e293d8: Gained carrier
Sep 5 00:14:23.081095 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL
Sep 5 00:14:23.453011 kubelet[2497]: E0905 00:14:23.452983 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:23.910993 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Sep 5 00:14:24.105059 systemd-networkd[1394]: lxc038e63e2dd25: Gained IPv6LL
Sep 5 00:14:24.423267 systemd-networkd[1394]: lxcb8b455e293d8: Gained IPv6LL
Sep 5 00:14:24.454431 kubelet[2497]: E0905 00:14:24.454410 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:25.016094 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:45816.service - OpenSSH per-connection server daemon (10.0.0.1:45816).
Sep 5 00:14:25.058705 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 45816 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:25.060898 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:25.065726 systemd-logind[1441]: New session 10 of user core.
Sep 5 00:14:25.077958 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 5 00:14:25.238727 sshd[3748]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:25.243655 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:45816.service: Deactivated successfully.
Sep 5 00:14:25.246027 systemd[1]: session-10.scope: Deactivated successfully.
Sep 5 00:14:25.246932 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Sep 5 00:14:25.247971 systemd-logind[1441]: Removed session 10.
Sep 5 00:14:26.401770 containerd[1459]: time="2025-09-05T00:14:26.401669265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:14:26.401770 containerd[1459]: time="2025-09-05T00:14:26.401727547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:14:26.401770 containerd[1459]: time="2025-09-05T00:14:26.401739100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:14:26.402281 containerd[1459]: time="2025-09-05T00:14:26.401879871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:14:26.421944 systemd[1]: Started cri-containerd-4da9a96320b92d18e5b5db2030f35e802210a695d4e94ee913d0de1b06a7b859.scope - libcontainer container 4da9a96320b92d18e5b5db2030f35e802210a695d4e94ee913d0de1b06a7b859.
Sep 5 00:14:26.430290 containerd[1459]: time="2025-09-05T00:14:26.430184565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:14:26.431335 containerd[1459]: time="2025-09-05T00:14:26.431296368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:14:26.431530 containerd[1459]: time="2025-09-05T00:14:26.431408082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:14:26.431654 containerd[1459]: time="2025-09-05T00:14:26.431624420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:14:26.440420 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:14:26.456909 systemd[1]: Started cri-containerd-36aab700eacea8aec3caac372ce439f99dd645d571f27be5951d29ff3ef03157.scope - libcontainer container 36aab700eacea8aec3caac372ce439f99dd645d571f27be5951d29ff3ef03157.
Sep 5 00:14:26.471250 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:14:26.472194 containerd[1459]: time="2025-09-05T00:14:26.472099486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27zll,Uid:e03ca0c2-e447-4667-960c-56e44888939f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4da9a96320b92d18e5b5db2030f35e802210a695d4e94ee913d0de1b06a7b859\""
Sep 5 00:14:26.473151 kubelet[2497]: E0905 00:14:26.472970 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:26.476237 containerd[1459]: time="2025-09-05T00:14:26.476200779Z" level=info msg="CreateContainer within sandbox \"4da9a96320b92d18e5b5db2030f35e802210a695d4e94ee913d0de1b06a7b859\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:14:26.496893 containerd[1459]: time="2025-09-05T00:14:26.496860301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqjh6,Uid:f820fb56-6654-4624-a8de-365fba6925dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"36aab700eacea8aec3caac372ce439f99dd645d571f27be5951d29ff3ef03157\""
Sep 5 00:14:26.497864 kubelet[2497]: E0905 00:14:26.497824 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:26.499525 containerd[1459]: time="2025-09-05T00:14:26.499486310Z" level=info msg="CreateContainer within sandbox \"36aab700eacea8aec3caac372ce439f99dd645d571f27be5951d29ff3ef03157\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:14:27.061155 containerd[1459]: time="2025-09-05T00:14:27.061082956Z" level=info msg="CreateContainer within sandbox \"4da9a96320b92d18e5b5db2030f35e802210a695d4e94ee913d0de1b06a7b859\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02670a03fc552ab816bcdc623d5dd735dfd7bc61590737d2a6619b2e35e5dc4d\""
Sep 5 00:14:27.061705 containerd[1459]: time="2025-09-05T00:14:27.061654507Z" level=info msg="StartContainer for \"02670a03fc552ab816bcdc623d5dd735dfd7bc61590737d2a6619b2e35e5dc4d\""
Sep 5 00:14:27.090907 systemd[1]: Started cri-containerd-02670a03fc552ab816bcdc623d5dd735dfd7bc61590737d2a6619b2e35e5dc4d.scope - libcontainer container 02670a03fc552ab816bcdc623d5dd735dfd7bc61590737d2a6619b2e35e5dc4d.
Sep 5 00:14:27.185790 containerd[1459]: time="2025-09-05T00:14:27.185742098Z" level=info msg="CreateContainer within sandbox \"36aab700eacea8aec3caac372ce439f99dd645d571f27be5951d29ff3ef03157\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be2d8da06f67fbddb933dcae70ce8b76f424687510a20a95724704dbb8a876e1\""
Sep 5 00:14:27.186429 containerd[1459]: time="2025-09-05T00:14:27.186385297Z" level=info msg="StartContainer for \"be2d8da06f67fbddb933dcae70ce8b76f424687510a20a95724704dbb8a876e1\""
Sep 5 00:14:27.214933 systemd[1]: Started cri-containerd-be2d8da06f67fbddb933dcae70ce8b76f424687510a20a95724704dbb8a876e1.scope - libcontainer container be2d8da06f67fbddb933dcae70ce8b76f424687510a20a95724704dbb8a876e1.
Sep 5 00:14:27.408363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336634305.mount: Deactivated successfully.
Sep 5 00:14:27.425922 containerd[1459]: time="2025-09-05T00:14:27.425709861Z" level=info msg="StartContainer for \"02670a03fc552ab816bcdc623d5dd735dfd7bc61590737d2a6619b2e35e5dc4d\" returns successfully"
Sep 5 00:14:27.425922 containerd[1459]: time="2025-09-05T00:14:27.425746351Z" level=info msg="StartContainer for \"be2d8da06f67fbddb933dcae70ce8b76f424687510a20a95724704dbb8a876e1\" returns successfully"
Sep 5 00:14:27.461766 kubelet[2497]: E0905 00:14:27.461734 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:27.463542 kubelet[2497]: E0905 00:14:27.463412 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:27.516715 kubelet[2497]: I0905 00:14:27.516631 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bqjh6" podStartSLOduration=36.516575305 podStartE2EDuration="36.516575305s" podCreationTimestamp="2025-09-05 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:14:27.515701433 +0000 UTC m=+41.283571166" watchObservedRunningTime="2025-09-05 00:14:27.516575305 +0000 UTC m=+41.284445048"
Sep 5 00:14:27.664258 kubelet[2497]: I0905 00:14:27.664105 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-27zll" podStartSLOduration=36.664084104 podStartE2EDuration="36.664084104s" podCreationTimestamp="2025-09-05 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:14:27.663254806 +0000 UTC m=+41.431124569" watchObservedRunningTime="2025-09-05 00:14:27.664084104 +0000 UTC m=+41.431953857"
Sep 5 00:14:28.465238 kubelet[2497]: E0905 00:14:28.465165 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:28.465501 kubelet[2497]: E0905 00:14:28.465165 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:29.467184 kubelet[2497]: E0905 00:14:29.467135 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:29.467687 kubelet[2497]: E0905 00:14:29.467286 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:30.256265 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:60242.service - OpenSSH per-connection server daemon (10.0.0.1:60242).
Sep 5 00:14:30.292704 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 60242 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:30.294657 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:30.298737 systemd-logind[1441]: New session 11 of user core.
Sep 5 00:14:30.309974 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:14:30.431312 sshd[3939]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:30.435232 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:60242.service: Deactivated successfully.
Sep 5 00:14:30.437225 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:14:30.437931 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:14:30.438836 systemd-logind[1441]: Removed session 11.
Sep 5 00:14:35.442591 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:60252.service - OpenSSH per-connection server daemon (10.0.0.1:60252).
Sep 5 00:14:35.474516 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 60252 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:35.476257 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:35.480039 systemd-logind[1441]: New session 12 of user core.
Sep 5 00:14:35.493919 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:14:35.601759 sshd[3954]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:35.605658 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:60252.service: Deactivated successfully.
Sep 5 00:14:35.607664 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:14:35.608253 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:14:35.609181 systemd-logind[1441]: Removed session 12.
Sep 5 00:14:40.616859 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:40936.service - OpenSSH per-connection server daemon (10.0.0.1:40936).
Sep 5 00:14:40.649217 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 40936 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:40.651010 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:40.655539 systemd-logind[1441]: New session 13 of user core.
Sep 5 00:14:40.665916 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:14:40.816555 sshd[3971]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:40.829207 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:40936.service: Deactivated successfully.
Sep 5 00:14:40.831535 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:14:40.833354 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:14:40.839049 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:40938.service - OpenSSH per-connection server daemon (10.0.0.1:40938).
Sep 5 00:14:40.840027 systemd-logind[1441]: Removed session 13.
Sep 5 00:14:40.871364 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:40.873121 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:40.877442 systemd-logind[1441]: New session 14 of user core.
Sep 5 00:14:40.895924 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:14:41.133456 sshd[3987]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:41.141962 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:40938.service: Deactivated successfully.
Sep 5 00:14:41.145095 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:14:41.147439 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:14:41.157108 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954).
Sep 5 00:14:41.158173 systemd-logind[1441]: Removed session 14.
Sep 5 00:14:41.186053 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:41.187898 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:41.192211 systemd-logind[1441]: New session 15 of user core.
Sep 5 00:14:41.201907 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:14:41.516380 sshd[4000]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:41.520805 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:40954.service: Deactivated successfully.
Sep 5 00:14:41.522933 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:14:41.523651 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:14:41.524598 systemd-logind[1441]: Removed session 15.
Sep 5 00:14:46.527625 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:40960.service - OpenSSH per-connection server daemon (10.0.0.1:40960).
Sep 5 00:14:46.560397 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 40960 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:46.562309 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:46.566499 systemd-logind[1441]: New session 16 of user core.
Sep 5 00:14:46.572905 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:14:46.686107 sshd[4018]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:46.691179 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:40960.service: Deactivated successfully.
Sep 5 00:14:46.693546 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:14:46.694256 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:14:46.695297 systemd-logind[1441]: Removed session 16.
Sep 5 00:14:51.698019 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:52236.service - OpenSSH per-connection server daemon (10.0.0.1:52236).
Sep 5 00:14:51.730473 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 52236 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:51.732141 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:51.735996 systemd-logind[1441]: New session 17 of user core.
Sep 5 00:14:51.743911 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:14:51.849324 sshd[4033]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:51.853144 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:52236.service: Deactivated successfully.
Sep 5 00:14:51.855106 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:14:51.855730 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:14:51.856727 systemd-logind[1441]: Removed session 17.
Sep 5 00:14:56.326880 kubelet[2497]: E0905 00:14:56.326832 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:14:56.860970 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:52248.service - OpenSSH per-connection server daemon (10.0.0.1:52248).
Sep 5 00:14:56.893104 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 52248 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:14:56.894599 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:14:56.898733 systemd-logind[1441]: New session 18 of user core.
Sep 5 00:14:56.914934 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:14:57.109299 sshd[4049]: pam_unix(sshd:session): session closed for user core
Sep 5 00:14:57.113497 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:52248.service: Deactivated successfully.
Sep 5 00:14:57.115944 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:14:57.116673 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:14:57.117689 systemd-logind[1441]: Removed session 18.
Sep 5 00:15:02.120843 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:48046.service - OpenSSH per-connection server daemon (10.0.0.1:48046).
Sep 5 00:15:02.152411 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 48046 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:02.153870 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:02.157702 systemd-logind[1441]: New session 19 of user core.
Sep 5 00:15:02.168912 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:15:02.272995 sshd[4064]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:02.283680 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:48046.service: Deactivated successfully.
Sep 5 00:15:02.285720 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:15:02.287279 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:15:02.295081 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:48050.service - OpenSSH per-connection server daemon (10.0.0.1:48050).
Sep 5 00:15:02.296043 systemd-logind[1441]: Removed session 19.
Sep 5 00:15:02.323502 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:02.325403 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:02.329675 systemd-logind[1441]: New session 20 of user core.
Sep 5 00:15:02.348919 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:15:03.243461 sshd[4079]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:03.255896 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:48050.service: Deactivated successfully.
Sep 5 00:15:03.258132 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:15:03.259984 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:15:03.261500 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:48052.service - OpenSSH per-connection server daemon (10.0.0.1:48052).
Sep 5 00:15:03.262713 systemd-logind[1441]: Removed session 20.
Sep 5 00:15:03.296943 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 48052 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:03.298560 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:03.303970 systemd-logind[1441]: New session 21 of user core.
Sep 5 00:15:03.307921 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:15:04.806078 sshd[4093]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:04.816141 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:48052.service: Deactivated successfully.
Sep 5 00:15:04.818251 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:15:04.820062 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:15:04.831448 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056).
Sep 5 00:15:04.832538 systemd-logind[1441]: Removed session 21.
Sep 5 00:15:04.858711 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:04.860250 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:04.864399 systemd-logind[1441]: New session 22 of user core.
Sep 5 00:15:04.870896 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:15:05.871592 sshd[4113]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:05.886928 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:48056.service: Deactivated successfully.
Sep 5 00:15:05.889051 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:15:05.890901 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:15:05.899371 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:48066.service - OpenSSH per-connection server daemon (10.0.0.1:48066).
Sep 5 00:15:05.900374 systemd-logind[1441]: Removed session 22.
Sep 5 00:15:05.926514 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 48066 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:05.928235 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:05.932523 systemd-logind[1441]: New session 23 of user core.
Sep 5 00:15:05.941966 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:15:06.059338 sshd[4126]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:06.063426 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:48066.service: Deactivated successfully.
Sep 5 00:15:06.065543 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:15:06.066178 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:15:06.067080 systemd-logind[1441]: Removed session 23.
Sep 5 00:15:09.324666 kubelet[2497]: E0905 00:15:09.324626 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:11.071885 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:39924.service - OpenSSH per-connection server daemon (10.0.0.1:39924).
Sep 5 00:15:11.103959 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 39924 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:11.105694 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:11.109896 systemd-logind[1441]: New session 24 of user core.
Sep 5 00:15:11.117925 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:15:11.226349 sshd[4141]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:11.230238 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:39924.service: Deactivated successfully.
Sep 5 00:15:11.232341 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:15:11.232940 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:15:11.233875 systemd-logind[1441]: Removed session 24.
Sep 5 00:15:13.324209 kubelet[2497]: E0905 00:15:13.324146 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:16.237814 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:39932.service - OpenSSH per-connection server daemon (10.0.0.1:39932).
Sep 5 00:15:16.270703 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 39932 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:16.272523 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:16.276611 systemd-logind[1441]: New session 25 of user core.
Sep 5 00:15:16.288948 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:15:16.398588 sshd[4155]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:16.402507 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:39932.service: Deactivated successfully.
Sep 5 00:15:16.404569 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:15:16.405316 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:15:16.406374 systemd-logind[1441]: Removed session 25.
Sep 5 00:15:21.410351 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598).
Sep 5 00:15:21.442621 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:21.460973 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:21.465649 systemd-logind[1441]: New session 26 of user core.
Sep 5 00:15:21.477908 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:15:21.642019 sshd[4169]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:21.646185 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:56598.service: Deactivated successfully.
Sep 5 00:15:21.648173 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:15:21.648836 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:15:21.649691 systemd-logind[1441]: Removed session 26.
Sep 5 00:15:24.324706 kubelet[2497]: E0905 00:15:24.324660 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:25.324573 kubelet[2497]: E0905 00:15:25.324531 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:26.653616 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:56600.service - OpenSSH per-connection server daemon (10.0.0.1:56600).
Sep 5 00:15:26.685437 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 56600 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:26.686878 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:26.690639 systemd-logind[1441]: New session 27 of user core.
Sep 5 00:15:26.703918 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:15:26.850610 sshd[4187]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:26.854340 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:56600.service: Deactivated successfully.
Sep 5 00:15:26.856539 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:15:26.857142 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:15:26.858236 systemd-logind[1441]: Removed session 27.
Sep 5 00:15:31.324720 kubelet[2497]: E0905 00:15:31.324681 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:31.865524 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:53156.service - OpenSSH per-connection server daemon (10.0.0.1:53156).
Sep 5 00:15:31.896994 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 53156 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:31.898546 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:31.902133 systemd-logind[1441]: New session 28 of user core.
Sep 5 00:15:31.915895 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 5 00:15:32.097027 sshd[4201]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:32.100738 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:53156.service: Deactivated successfully.
Sep 5 00:15:32.102644 systemd[1]: session-28.scope: Deactivated successfully.
Sep 5 00:15:32.103224 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit.
Sep 5 00:15:32.104194 systemd-logind[1441]: Removed session 28.
Sep 5 00:15:36.325442 kubelet[2497]: E0905 00:15:36.325401 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:37.113170 systemd[1]: Started sshd@28-10.0.0.79:22-10.0.0.1:53160.service - OpenSSH per-connection server daemon (10.0.0.1:53160).
Sep 5 00:15:37.146146 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:37.147822 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:37.152468 systemd-logind[1441]: New session 29 of user core.
Sep 5 00:15:37.161916 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 5 00:15:37.288308 sshd[4216]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:37.292137 systemd[1]: sshd@28-10.0.0.79:22-10.0.0.1:53160.service: Deactivated successfully.
Sep 5 00:15:37.294049 systemd[1]: session-29.scope: Deactivated successfully.
Sep 5 00:15:37.294640 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit.
Sep 5 00:15:37.295514 systemd-logind[1441]: Removed session 29.
Sep 5 00:15:42.300222 systemd[1]: Started sshd@29-10.0.0.79:22-10.0.0.1:44242.service - OpenSSH per-connection server daemon (10.0.0.1:44242).
Sep 5 00:15:42.333130 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:42.334770 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:42.339735 systemd-logind[1441]: New session 30 of user core.
Sep 5 00:15:42.352993 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 5 00:15:42.457315 sshd[4232]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:42.469842 systemd[1]: sshd@29-10.0.0.79:22-10.0.0.1:44242.service: Deactivated successfully.
Sep 5 00:15:42.472225 systemd[1]: session-30.scope: Deactivated successfully.
Sep 5 00:15:42.473970 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit.
Sep 5 00:15:42.485192 systemd[1]: Started sshd@30-10.0.0.79:22-10.0.0.1:44244.service - OpenSSH per-connection server daemon (10.0.0.1:44244).
Sep 5 00:15:42.486326 systemd-logind[1441]: Removed session 30.
Sep 5 00:15:42.514839 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 44244 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:42.516602 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:42.520784 systemd-logind[1441]: New session 31 of user core.
Sep 5 00:15:42.530077 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 5 00:15:44.324138 kubelet[2497]: E0905 00:15:44.324093 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:44.988862 containerd[1459]: time="2025-09-05T00:15:44.988815112Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:15:44.992275 containerd[1459]: time="2025-09-05T00:15:44.992236508Z" level=info msg="StopContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" with timeout 2 (s)"
Sep 5 00:15:44.992484 containerd[1459]: time="2025-09-05T00:15:44.992456859Z" level=info msg="Stop container \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" with signal terminated"
Sep 5 00:15:44.999485 systemd-networkd[1394]: lxc_health: Link DOWN
Sep 5 00:15:44.999494 systemd-networkd[1394]: lxc_health: Lost carrier
Sep 5 00:15:45.029225 systemd[1]: cri-containerd-4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59.scope: Deactivated successfully.
Sep 5 00:15:45.029610 systemd[1]: cri-containerd-4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59.scope: Consumed 7.243s CPU time.
Sep 5 00:15:45.048084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59-rootfs.mount: Deactivated successfully.
Sep 5 00:15:45.542487 containerd[1459]: time="2025-09-05T00:15:45.542431695Z" level=info msg="StopContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" with timeout 30 (s)"
Sep 5 00:15:45.542913 containerd[1459]: time="2025-09-05T00:15:45.542830105Z" level=info msg="Stop container \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" with signal terminated"
Sep 5 00:15:45.553352 systemd[1]: cri-containerd-5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762.scope: Deactivated successfully.
Sep 5 00:15:45.588604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762-rootfs.mount: Deactivated successfully.
Sep 5 00:15:45.610249 containerd[1459]: time="2025-09-05T00:15:45.610190598Z" level=info msg="shim disconnected" id=4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59 namespace=k8s.io
Sep 5 00:15:45.628636 containerd[1459]: time="2025-09-05T00:15:45.610247867Z" level=warning msg="cleaning up after shim disconnected" id=4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59 namespace=k8s.io
Sep 5 00:15:45.628636 containerd[1459]: time="2025-09-05T00:15:45.610262816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:45.915896 containerd[1459]: time="2025-09-05T00:15:45.915843800Z" level=info msg="shim disconnected" id=5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762 namespace=k8s.io
Sep 5 00:15:45.915896 containerd[1459]: time="2025-09-05T00:15:45.915894767Z" level=warning msg="cleaning up after shim disconnected" id=5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762 namespace=k8s.io
Sep 5 00:15:45.916061 containerd[1459]: time="2025-09-05T00:15:45.915902692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:46.094248 containerd[1459]: time="2025-09-05T00:15:46.094187479Z" level=info msg="StopContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" returns successfully"
Sep 5 00:15:46.104285 sshd[4246]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:46.114507 systemd[1]: sshd@30-10.0.0.79:22-10.0.0.1:44244.service: Deactivated successfully.
Sep 5 00:15:46.116421 systemd[1]: session-31.scope: Deactivated successfully.
Sep 5 00:15:46.118269 systemd-logind[1441]: Session 31 logged out. Waiting for processes to exit.
Sep 5 00:15:46.133177 systemd[1]: Started sshd@31-10.0.0.79:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260).
Sep 5 00:15:46.134118 systemd-logind[1441]: Removed session 31.
Sep 5 00:15:46.156463 containerd[1459]: time="2025-09-05T00:15:46.156417757Z" level=info msg="StopPodSandbox for \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\""
Sep 5 00:15:46.156547 containerd[1459]: time="2025-09-05T00:15:46.156473544Z" level=info msg="Container to stop \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.156547 containerd[1459]: time="2025-09-05T00:15:46.156485086Z" level=info msg="Container to stop \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.156547 containerd[1459]: time="2025-09-05T00:15:46.156495034Z" level=info msg="Container to stop \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.156547 containerd[1459]: time="2025-09-05T00:15:46.156506116Z" level=info msg="Container to stop \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.156547 containerd[1459]: time="2025-09-05T00:15:46.156523268Z" level=info msg="Container to stop \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.159207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544-shm.mount: Deactivated successfully.
Sep 5 00:15:46.162453 systemd[1]: cri-containerd-92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544.scope: Deactivated successfully.
Sep 5 00:15:46.177379 containerd[1459]: time="2025-09-05T00:15:46.177247718Z" level=info msg="StopContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" returns successfully"
Sep 5 00:15:46.178274 containerd[1459]: time="2025-09-05T00:15:46.178247685Z" level=info msg="StopPodSandbox for \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\""
Sep 5 00:15:46.178358 containerd[1459]: time="2025-09-05T00:15:46.178283463Z" level=info msg="Container to stop \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:15:46.180243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc-shm.mount: Deactivated successfully.
Sep 5 00:15:46.182502 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:46.183061 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:46.186046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544-rootfs.mount: Deactivated successfully.
Sep 5 00:15:46.189503 systemd[1]: cri-containerd-a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc.scope: Deactivated successfully.
Sep 5 00:15:46.190056 systemd-logind[1441]: New session 32 of user core.
Sep 5 00:15:46.197165 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 5 00:15:46.214089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc-rootfs.mount: Deactivated successfully.
Sep 5 00:15:46.378712 kubelet[2497]: E0905 00:15:46.378665 2497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641265121Z" level=info msg="shim disconnected" id=92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544 namespace=k8s.io
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641335986Z" level=warning msg="cleaning up after shim disconnected" id=92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544 namespace=k8s.io
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641348139Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641384458Z" level=info msg="shim disconnected" id=a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc namespace=k8s.io
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641408414Z" level=warning msg="cleaning up after shim disconnected" id=a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc namespace=k8s.io
Sep 5 00:15:46.642747 containerd[1459]: time="2025-09-05T00:15:46.641416810Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:46.681531 containerd[1459]: time="2025-09-05T00:15:46.681476360Z" level=info msg="TearDown network for sandbox \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\" successfully"
Sep 5 00:15:46.681531 containerd[1459]: time="2025-09-05T00:15:46.681521606Z" level=info msg="StopPodSandbox for \"a68a4e1701d9b86c86af06d3151d6e37937ba74baa8e5d1a669033928f8625fc\" returns successfully"
Sep 5 00:15:46.685742 containerd[1459]: time="2025-09-05T00:15:46.684992166Z" level=info msg="TearDown network for sandbox \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" successfully"
Sep 5 00:15:46.685742 containerd[1459]: time="2025-09-05T00:15:46.685015801Z" level=info msg="StopPodSandbox for \"92dbfdfc50fe15077cdce51c5ac63095c245598652eb15635e1de115ec7da544\" returns successfully"
Sep 5 00:15:46.734980 kubelet[2497]: I0905 00:15:46.734946 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777dw\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-kube-api-access-777dw\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.734980 kubelet[2497]: I0905 00:15:46.734977 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-xtables-lock\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.734991 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-cgroup\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.735005 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cni-path\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.735020 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-hostproc\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.735041 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-net\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.735058 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:15:46.735133 kubelet[2497]: I0905 00:15:46.735063 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-hubble-tls\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735101 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-kernel\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735123 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e650c0f-f847-4fd8-b57e-fe48516d470e-clustermesh-secrets\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735137 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-etc-cni-netd\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735152 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-config-path\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735176 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-cilium-config-path\") pod \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\" (UID: \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\") "
Sep 5 00:15:46.735305 kubelet[2497]: I0905 00:15:46.735191 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-run\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") "
Sep 5 00:15:46.735494 kubelet[2497]: I0905 00:15:46.735205 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dh8h\" (UniqueName: \"kubernetes.io/projected/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-kube-api-access-2dh8h\") pod \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\" (UID: \"bcd85f75-cbfd-4c8b-80bd-41651ab1e466\") "
Sep 5 00:15:46.735494 kubelet[2497]: I0905
00:15:46.735219 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-bpf-maps\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " Sep 5 00:15:46.735494 kubelet[2497]: I0905 00:15:46.735231 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-lib-modules\") pod \"8e650c0f-f847-4fd8-b57e-fe48516d470e\" (UID: \"8e650c0f-f847-4fd8-b57e-fe48516d470e\") " Sep 5 00:15:46.735494 kubelet[2497]: I0905 00:15:46.735257 2497 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 5 00:15:46.735494 kubelet[2497]: I0905 00:15:46.735275 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.735494 kubelet[2497]: I0905 00:15:46.735291 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.735644 kubelet[2497]: I0905 00:15:46.735366 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.735644 kubelet[2497]: I0905 00:15:46.735398 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.735644 kubelet[2497]: I0905 00:15:46.735423 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.754801 kubelet[2497]: I0905 00:15:46.754516 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.754801 kubelet[2497]: I0905 00:15:46.754524 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.754801 kubelet[2497]: I0905 00:15:46.754523 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.759331 kubelet[2497]: I0905 00:15:46.757810 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-kube-api-access-2dh8h" (OuterVolumeSpecName: "kube-api-access-2dh8h") pod "bcd85f75-cbfd-4c8b-80bd-41651ab1e466" (UID: "bcd85f75-cbfd-4c8b-80bd-41651ab1e466"). InnerVolumeSpecName "kube-api-access-2dh8h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:15:46.759331 kubelet[2497]: I0905 00:15:46.757857 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:15:46.759331 kubelet[2497]: I0905 00:15:46.757900 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e650c0f-f847-4fd8-b57e-fe48516d470e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:15:46.759331 kubelet[2497]: I0905 00:15:46.758057 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:15:46.759331 kubelet[2497]: I0905 00:15:46.759036 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:15:46.760855 kubelet[2497]: I0905 00:15:46.760747 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-kube-api-access-777dw" (OuterVolumeSpecName: "kube-api-access-777dw") pod "8e650c0f-f847-4fd8-b57e-fe48516d470e" (UID: "8e650c0f-f847-4fd8-b57e-fe48516d470e"). InnerVolumeSpecName "kube-api-access-777dw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:15:46.761124 kubelet[2497]: I0905 00:15:46.761094 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bcd85f75-cbfd-4c8b-80bd-41651ab1e466" (UID: "bcd85f75-cbfd-4c8b-80bd-41651ab1e466"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:15:46.761269 systemd[1]: var-lib-kubelet-pods-bcd85f75\x2dcbfd\x2d4c8b\x2d80bd\x2d41651ab1e466-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dh8h.mount: Deactivated successfully. Sep 5 00:15:46.761424 systemd[1]: var-lib-kubelet-pods-8e650c0f\x2df847\x2d4fd8\x2db57e\x2dfe48516d470e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 00:15:46.761531 systemd[1]: var-lib-kubelet-pods-8e650c0f\x2df847\x2d4fd8\x2db57e\x2dfe48516d470e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
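The mount-unit names in the systemd entries above encode the underlying kubelet volume paths using systemd's unit-name escaping: `/` becomes `-`, a literal `-` becomes `\x2d`, and `~` becomes `\x7e` (the `systemd-escape --path` scheme). A minimal decoder sketch, assuming well-formed names; `unescape_unit` is an illustrative helper name, not a systemd API:

```python
def unescape_unit(name: str) -> str:
    """Decode a systemd mount-unit name back into the mounted path."""
    # Mount units are named after the escaped path, with a ".mount" suffix.
    if name.endswith(".mount"):
        name = name[: -len(".mount")]
    out = []
    i = 0
    while i < len(name):
        if name[i] == "\\" and name[i + 1 : i + 2] == "x":
            # \xNN escapes a single byte, e.g. \x2d -> "-", \x7e -> "~"
            out.append(chr(int(name[i + 2 : i + 4], 16)))
            i += 4
        elif name[i] == "-":
            out.append("/")  # unescaped "-" stands for a path separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    # Unit names drop the leading "/" of the absolute path; restore it.
    return "/" + "".join(out)
```

Applied to the `hubble-tls` unit above, this recovers `/var/lib/kubelet/pods/8e650c0f-f847-4fd8-b57e-fe48516d470e/volumes/kubernetes.io~projected/hubble-tls`.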
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835651 2497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835680 2497 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e650c0f-f847-4fd8-b57e-fe48516d470e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835689 2497 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835698 2497 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835706 2497 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835700 kubelet[2497]: I0905 00:15:46.835716 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835724 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835732 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835740 2497 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835747 2497 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835755 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dh8h\" (UniqueName: \"kubernetes.io/projected/bcd85f75-cbfd-4c8b-80bd-41651ab1e466-kube-api-access-2dh8h\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835763 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-777dw\" (UniqueName: \"kubernetes.io/projected/8e650c0f-f847-4fd8-b57e-fe48516d470e-kube-api-access-777dw\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835771 2497 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.835984 kubelet[2497]: I0905 00:15:46.835799 2497 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:46.836154 kubelet[2497]: I0905 00:15:46.835807 2497 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e650c0f-f847-4fd8-b57e-fe48516d470e-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 5 00:15:47.159038 systemd[1]: var-lib-kubelet-pods-8e650c0f\x2df847\x2d4fd8\x2db57e\x2dfe48516d470e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d777dw.mount: Deactivated successfully.
Sep 5 00:15:47.614094 kubelet[2497]: I0905 00:15:47.613962 2497 scope.go:117] "RemoveContainer" containerID="4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59"
Sep 5 00:15:47.616940 containerd[1459]: time="2025-09-05T00:15:47.616908202Z" level=info msg="RemoveContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\""
Sep 5 00:15:47.620841 systemd[1]: Removed slice kubepods-burstable-pod8e650c0f_f847_4fd8_b57e_fe48516d470e.slice - libcontainer container kubepods-burstable-pod8e650c0f_f847_4fd8_b57e_fe48516d470e.slice.
Sep 5 00:15:47.620958 systemd[1]: kubepods-burstable-pod8e650c0f_f847_4fd8_b57e_fe48516d470e.slice: Consumed 7.353s CPU time.
Sep 5 00:15:47.648197 systemd[1]: Removed slice kubepods-besteffort-podbcd85f75_cbfd_4c8b_80bd_41651ab1e466.slice - libcontainer container kubepods-besteffort-podbcd85f75_cbfd_4c8b_80bd_41651ab1e466.slice.
Sep 5 00:15:47.714058 containerd[1459]: time="2025-09-05T00:15:47.714003243Z" level=info msg="RemoveContainer for \"4176274da6aaa166588cfb0bddd92b181b1bc956f21610b59dfe3429fc82fc59\" returns successfully"
Sep 5 00:15:47.714356 kubelet[2497]: I0905 00:15:47.714312 2497 scope.go:117] "RemoveContainer" containerID="b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279"
Sep 5 00:15:47.715245 containerd[1459]: time="2025-09-05T00:15:47.715216997Z" level=info msg="RemoveContainer for \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\""
Sep 5 00:15:47.810317 containerd[1459]: time="2025-09-05T00:15:47.810254864Z" level=info msg="RemoveContainer for \"b6b89a4c82e97cc706f763006a702742201c729c9ec387b6414729cb8554e279\" returns successfully"
Sep 5 00:15:47.810551 kubelet[2497]: I0905 00:15:47.810511 2497 scope.go:117] "RemoveContainer" containerID="e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba"
Sep 5 00:15:47.811441 containerd[1459]: time="2025-09-05T00:15:47.811419996Z" level=info msg="RemoveContainer for \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\""
Sep 5 00:15:47.933877 containerd[1459]: time="2025-09-05T00:15:47.933835325Z" level=info msg="RemoveContainer for \"e05d74c639ffc25edcd583103c47c22cbd2ef698fff893ba24788098fc4017ba\" returns successfully"
Sep 5 00:15:47.934012 kubelet[2497]: I0905 00:15:47.933982 2497 scope.go:117] "RemoveContainer" containerID="527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca"
Sep 5 00:15:47.935730 containerd[1459]: time="2025-09-05T00:15:47.935698580Z" level=info msg="RemoveContainer for \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\""
Sep 5 00:15:47.999535 containerd[1459]: time="2025-09-05T00:15:47.999486166Z" level=info msg="RemoveContainer for \"527ec13c0cecc44c8530406d2231eb2a2607b97ea52807ba8915a5bb860673ca\" returns successfully"
Sep 5 00:15:47.999751 kubelet[2497]: I0905 00:15:47.999724 2497 scope.go:117] "RemoveContainer" containerID="194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80"
Sep 5 00:15:48.000631 containerd[1459]: time="2025-09-05T00:15:48.000609739Z" level=info msg="RemoveContainer for \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\""
Sep 5 00:15:48.091424 containerd[1459]: time="2025-09-05T00:15:48.091378323Z" level=info msg="RemoveContainer for \"194badfab6dea5d8f4d300df2442daa828258365ff50b5b44987fccf58ccfe80\" returns successfully"
Sep 5 00:15:48.091566 kubelet[2497]: I0905 00:15:48.091542 2497 scope.go:117] "RemoveContainer" containerID="5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762"
Sep 5 00:15:48.092388 containerd[1459]: time="2025-09-05T00:15:48.092360276Z" level=info msg="RemoveContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\""
Sep 5 00:15:48.198905 containerd[1459]: time="2025-09-05T00:15:48.198787200Z" level=info msg="RemoveContainer for \"5d501a5e157d8bd11b4a37417ca02f298c76d086ccbf47c84053a79dbb486762\" returns successfully"
Sep 5 00:15:48.326705 kubelet[2497]: I0905 00:15:48.326660 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e650c0f-f847-4fd8-b57e-fe48516d470e" path="/var/lib/kubelet/pods/8e650c0f-f847-4fd8-b57e-fe48516d470e/volumes"
Sep 5 00:15:48.327542 kubelet[2497]: I0905 00:15:48.327517 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd85f75-cbfd-4c8b-80bd-41651ab1e466" path="/var/lib/kubelet/pods/bcd85f75-cbfd-4c8b-80bd-41651ab1e466/volumes"
Sep 5 00:15:48.659348 sshd[4342]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:48.668705 systemd[1]: sshd@31-10.0.0.79:22-10.0.0.1:44260.service: Deactivated successfully.
Sep 5 00:15:48.670672 systemd[1]: session-32.scope: Deactivated successfully.
Sep 5 00:15:48.672386 systemd-logind[1441]: Session 32 logged out. Waiting for processes to exit.
Sep 5 00:15:48.679226 systemd[1]: Started sshd@32-10.0.0.79:22-10.0.0.1:44276.service - OpenSSH per-connection server daemon (10.0.0.1:44276).
Sep 5 00:15:48.680486 systemd-logind[1441]: Removed session 32.
Sep 5 00:15:48.705866 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 44276 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:48.707417 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:48.711327 systemd-logind[1441]: New session 33 of user core.
Sep 5 00:15:48.720888 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep 5 00:15:48.770547 sshd[4423]: pam_unix(sshd:session): session closed for user core
Sep 5 00:15:48.781514 systemd[1]: sshd@32-10.0.0.79:22-10.0.0.1:44276.service: Deactivated successfully.
Sep 5 00:15:48.783307 systemd[1]: session-33.scope: Deactivated successfully.
Sep 5 00:15:48.784916 systemd-logind[1441]: Session 33 logged out. Waiting for processes to exit.
Sep 5 00:15:48.792065 systemd[1]: Started sshd@33-10.0.0.79:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290).
Sep 5 00:15:48.793050 systemd-logind[1441]: Removed session 33.
Sep 5 00:15:48.819707 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:15:48.821146 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:15:48.825261 systemd-logind[1441]: New session 34 of user core.
Sep 5 00:15:48.831913 systemd[1]: Started session-34.scope - Session 34 of User core.
Sep 5 00:15:50.167853 kubelet[2497]: I0905 00:15:50.167793 2497 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:15:50Z","lastTransitionTime":"2025-09-05T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 5 00:15:50.549868 kubelet[2497]: I0905 00:15:50.549712 2497 memory_manager.go:355] "RemoveStaleState removing state" podUID="bcd85f75-cbfd-4c8b-80bd-41651ab1e466" containerName="cilium-operator"
Sep 5 00:15:50.549868 kubelet[2497]: I0905 00:15:50.549744 2497 memory_manager.go:355] "RemoveStaleState removing state" podUID="8e650c0f-f847-4fd8-b57e-fe48516d470e" containerName="cilium-agent"
Sep 5 00:15:50.565040 systemd[1]: Created slice kubepods-burstable-pod94aa35ef_a8cd_4fe4_9dcc_d2acb8caf187.slice - libcontainer container kubepods-burstable-pod94aa35ef_a8cd_4fe4_9dcc_d2acb8caf187.slice.
Sep 5 00:15:50.656330 kubelet[2497]: I0905 00:15:50.656287 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-xtables-lock\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656330 kubelet[2497]: I0905 00:15:50.656320 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-clustermesh-secrets\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656353 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-host-proc-sys-kernel\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656384 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-cni-path\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656407 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-hostproc\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656428 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-cilium-config-path\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656449 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-bpf-maps\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656486 kubelet[2497]: I0905 00:15:50.656469 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-hubble-tls\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656488 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67gt\" (UniqueName: \"kubernetes.io/projected/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-kube-api-access-d67gt\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656511 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-cilium-run\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656531 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-cilium-cgroup\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656548 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-host-proc-sys-net\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656570 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-etc-cni-netd\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656663 kubelet[2497]: I0905 00:15:50.656604 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-lib-modules\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:50.656825 kubelet[2497]: I0905 00:15:50.656652 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187-cilium-ipsec-secrets\") pod \"cilium-bvh8p\" (UID: \"94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187\") " pod="kube-system/cilium-bvh8p"
Sep 5 00:15:51.167891 kubelet[2497]: E0905 00:15:51.167830 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:51.168549 containerd[1459]: time="2025-09-05T00:15:51.168470468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvh8p,Uid:94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187,Namespace:kube-system,Attempt:0,}"
Sep 5 00:15:51.380247 kubelet[2497]: E0905 00:15:51.380190 2497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:15:51.519124 containerd[1459]: time="2025-09-05T00:15:51.518111464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:15:51.519124 containerd[1459]: time="2025-09-05T00:15:51.518834133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:15:51.519124 containerd[1459]: time="2025-09-05T00:15:51.518847199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:15:51.519124 containerd[1459]: time="2025-09-05T00:15:51.518932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:15:51.541929 systemd[1]: Started cri-containerd-8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c.scope - libcontainer container 8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c.
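The recurring "Nameserver limits exceeded" kubelet errors in these entries arise because the glibc resolver honors at most three `nameserver` lines in resolv.conf (the MAXNS limit), so kubelet truncates the list it hands to pods and logs the applied line, as seen above. A small sketch of that truncation behavior; `effective_nameservers` is an illustrative helper, not kubelet's actual code:

```python
MAXNS = 3  # glibc resolver cap on nameserver entries in resolv.conf

def effective_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    """Return (kept, omitted) nameservers, mimicking the 3-entry cap."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAXNS], servers[MAXNS:]
```

With four configured servers, the first three survive (matching the applied line `1.1.1.1 1.0.0.1 8.8.8.8` in the log) and the rest are omitted, which is exactly what triggers the warning.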
Sep 5 00:15:51.564295 containerd[1459]: time="2025-09-05T00:15:51.564253403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvh8p,Uid:94aa35ef-a8cd-4fe4-9dcc-d2acb8caf187,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\""
Sep 5 00:15:51.565018 kubelet[2497]: E0905 00:15:51.564986 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:51.566985 containerd[1459]: time="2025-09-05T00:15:51.566954606Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:15:51.994738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742820258.mount: Deactivated successfully.
Sep 5 00:15:52.462239 containerd[1459]: time="2025-09-05T00:15:52.462170907Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196\""
Sep 5 00:15:52.462745 containerd[1459]: time="2025-09-05T00:15:52.462721517Z" level=info msg="StartContainer for \"1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196\""
Sep 5 00:15:52.496917 systemd[1]: Started cri-containerd-1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196.scope - libcontainer container 1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196.
Sep 5 00:15:52.531328 systemd[1]: cri-containerd-1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196.scope: Deactivated successfully.
Sep 5 00:15:52.673626 containerd[1459]: time="2025-09-05T00:15:52.673580815Z" level=info msg="StartContainer for \"1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196\" returns successfully"
Sep 5 00:15:52.763321 systemd[1]: run-containerd-runc-k8s.io-1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196-runc.aLF7jJ.mount: Deactivated successfully.
Sep 5 00:15:52.763457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196-rootfs.mount: Deactivated successfully.
Sep 5 00:15:53.291200 containerd[1459]: time="2025-09-05T00:15:53.291127895Z" level=info msg="shim disconnected" id=1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196 namespace=k8s.io
Sep 5 00:15:53.291200 containerd[1459]: time="2025-09-05T00:15:53.291196355Z" level=warning msg="cleaning up after shim disconnected" id=1c7fba55c29dc9324d84eac0f288d93346bfb20bd38377108407c04eae822196 namespace=k8s.io
Sep 5 00:15:53.291437 containerd[1459]: time="2025-09-05T00:15:53.291207476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:53.678843 kubelet[2497]: E0905 00:15:53.678760 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:53.681234 containerd[1459]: time="2025-09-05T00:15:53.681191788Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:15:54.510600 containerd[1459]: time="2025-09-05T00:15:54.510525489Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c\""
Sep 5 00:15:54.511330 containerd[1459]: time="2025-09-05T00:15:54.511137326Z" level=info msg="StartContainer for \"569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c\""
Sep 5 00:15:54.540990 systemd[1]: Started cri-containerd-569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c.scope - libcontainer container 569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c.
Sep 5 00:15:54.574633 systemd[1]: cri-containerd-569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c.scope: Deactivated successfully.
Sep 5 00:15:55.023811 containerd[1459]: time="2025-09-05T00:15:55.023734040Z" level=info msg="StartContainer for \"569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c\" returns successfully"
Sep 5 00:15:55.042154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c-rootfs.mount: Deactivated successfully.
Sep 5 00:15:56.029155 kubelet[2497]: E0905 00:15:56.029111 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:56.170423 containerd[1459]: time="2025-09-05T00:15:56.170343141Z" level=info msg="shim disconnected" id=569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c namespace=k8s.io
Sep 5 00:15:56.170423 containerd[1459]: time="2025-09-05T00:15:56.170406573Z" level=warning msg="cleaning up after shim disconnected" id=569904c360520ba23e837cde90afa38b44fbb78b46f30c5a65ff4c1f9a480f1c namespace=k8s.io
Sep 5 00:15:56.170423 containerd[1459]: time="2025-09-05T00:15:56.170416211Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:56.381638 kubelet[2497]: E0905 00:15:56.381505 2497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:15:57.032936 kubelet[2497]: E0905 00:15:57.032895 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:57.034846 containerd[1459]: time="2025-09-05T00:15:57.034808936Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:15:57.814208 containerd[1459]: time="2025-09-05T00:15:57.814102607Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3\""
Sep 5 00:15:57.814840 containerd[1459]: time="2025-09-05T00:15:57.814807243Z" level=info msg="StartContainer for \"b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3\""
Sep 5 00:15:57.847903 systemd[1]: Started cri-containerd-b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3.scope - libcontainer container b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3.
Sep 5 00:15:57.881468 systemd[1]: cri-containerd-b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3.scope: Deactivated successfully.
Sep 5 00:15:58.073058 containerd[1459]: time="2025-09-05T00:15:58.072209006Z" level=info msg="StartContainer for \"b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3\" returns successfully"
Sep 5 00:15:58.091811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3-rootfs.mount: Deactivated successfully.
Sep 5 00:15:58.461420 containerd[1459]: time="2025-09-05T00:15:58.461337658Z" level=info msg="shim disconnected" id=b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3 namespace=k8s.io
Sep 5 00:15:58.461420 containerd[1459]: time="2025-09-05T00:15:58.461417941Z" level=warning msg="cleaning up after shim disconnected" id=b04e4adc19b0b02b770662b04344e96bfc7eab14964db14b382908915155f0e3 namespace=k8s.io
Sep 5 00:15:58.461617 containerd[1459]: time="2025-09-05T00:15:58.461431297Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:15:59.079799 kubelet[2497]: E0905 00:15:59.079747 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:15:59.081360 containerd[1459]: time="2025-09-05T00:15:59.081311058Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:15:59.749475 containerd[1459]: time="2025-09-05T00:15:59.749401527Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71\""
Sep 5 00:15:59.750073 containerd[1459]: time="2025-09-05T00:15:59.750023735Z" level=info msg="StartContainer for \"282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71\""
Sep 5 00:15:59.777910 systemd[1]: Started cri-containerd-282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71.scope - libcontainer container 282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71.
Sep 5 00:15:59.802463 systemd[1]: cri-containerd-282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71.scope: Deactivated successfully.
Sep 5 00:16:00.035193 containerd[1459]: time="2025-09-05T00:16:00.035018017Z" level=info msg="StartContainer for \"282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71\" returns successfully"
Sep 5 00:16:00.052233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71-rootfs.mount: Deactivated successfully.
Sep 5 00:16:00.101624 kubelet[2497]: E0905 00:16:00.082943 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:00.854122 containerd[1459]: time="2025-09-05T00:16:00.854064749Z" level=info msg="shim disconnected" id=282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71 namespace=k8s.io
Sep 5 00:16:00.854122 containerd[1459]: time="2025-09-05T00:16:00.854113241Z" level=warning msg="cleaning up after shim disconnected" id=282f11a28c59858d5e22a56d2638b408957f171d23c410272efb96dc2082cc71 namespace=k8s.io
Sep 5 00:16:00.854122 containerd[1459]: time="2025-09-05T00:16:00.854122098Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:16:01.145879 kubelet[2497]: E0905 00:16:01.145344 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:01.146951 containerd[1459]: time="2025-09-05T00:16:01.146916351Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:16:01.382770 kubelet[2497]: E0905 00:16:01.382728 2497 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:16:01.868067 containerd[1459]: time="2025-09-05T00:16:01.867975729Z" level=info msg="CreateContainer within sandbox \"8c4227b863643bd4443459cffde09874a9a1eba0995d995bb35ffe2d64e8772c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"004363a913da7a677d97810705d80f473d3965310a1c1214b79af4a73aefd21e\""
Sep 5 00:16:01.868609 containerd[1459]: time="2025-09-05T00:16:01.868581465Z" level=info msg="StartContainer for \"004363a913da7a677d97810705d80f473d3965310a1c1214b79af4a73aefd21e\""
Sep 5 00:16:01.902987 systemd[1]: Started cri-containerd-004363a913da7a677d97810705d80f473d3965310a1c1214b79af4a73aefd21e.scope - libcontainer container 004363a913da7a677d97810705d80f473d3965310a1c1214b79af4a73aefd21e.
Sep 5 00:16:02.115482 containerd[1459]: time="2025-09-05T00:16:02.115403272Z" level=info msg="StartContainer for \"004363a913da7a677d97810705d80f473d3965310a1c1214b79af4a73aefd21e\" returns successfully"
Sep 5 00:16:02.337816 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 5 00:16:03.152089 kubelet[2497]: E0905 00:16:03.152050 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:04.153870 kubelet[2497]: E0905 00:16:04.153832 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:05.570170 systemd-networkd[1394]: lxc_health: Link UP
Sep 5 00:16:05.584342 systemd-networkd[1394]: lxc_health: Gained carrier
Sep 5 00:16:07.015033 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Sep 5 00:16:07.170422 kubelet[2497]: E0905 00:16:07.169850 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:07.380314 kubelet[2497]: I0905 00:16:07.379547 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bvh8p" podStartSLOduration=18.379533326 podStartE2EDuration="18.379533326s" podCreationTimestamp="2025-09-05 00:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:16:03.357195811 +0000 UTC m=+137.125065544" watchObservedRunningTime="2025-09-05 00:16:07.379533326 +0000 UTC m=+141.147403059"
Sep 5 00:16:08.161485 kubelet[2497]: E0905 00:16:08.161444 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:09.163583 kubelet[2497]: E0905 00:16:09.163539 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:16.324668 kubelet[2497]: E0905 00:16:16.324625 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:16:18.278802 sshd[4431]: pam_unix(sshd:session): session closed for user core
Sep 5 00:16:18.282508 systemd[1]: sshd@33-10.0.0.79:22-10.0.0.1:44290.service: Deactivated successfully.
Sep 5 00:16:18.284515 systemd[1]: session-34.scope: Deactivated successfully.
Sep 5 00:16:18.285212 systemd-logind[1441]: Session 34 logged out. Waiting for processes to exit.
Sep 5 00:16:18.286108 systemd-logind[1441]: Removed session 34.
Sep 5 00:16:18.324662 kubelet[2497]: E0905 00:16:18.324627 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"