Nov 6 00:24:08.088008 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:24:08.088030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:24:08.088040 kernel: BIOS-provided physical RAM map: Nov 6 00:24:08.088047 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 6 00:24:08.088053 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 6 00:24:08.088062 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Nov 6 00:24:08.088070 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 6 00:24:08.088077 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Nov 6 00:24:08.088083 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 6 00:24:08.088090 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 6 00:24:08.088097 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 6 00:24:08.088103 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 6 00:24:08.088110 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 6 00:24:08.088117 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 6 00:24:08.088127 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 6 00:24:08.088135 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 6 00:24:08.088146 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 6 00:24:08.088153 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 00:24:08.088160 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 00:24:08.088169 kernel: NX (Execute Disable) protection: active Nov 6 00:24:08.088177 kernel: APIC: Static calls initialized Nov 6 00:24:08.088184 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable Nov 6 00:24:08.088191 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable Nov 6 00:24:08.088198 kernel: extended physical RAM map: Nov 6 00:24:08.088206 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 6 00:24:08.088213 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 6 00:24:08.088220 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Nov 6 00:24:08.088228 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 6 00:24:08.088235 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable Nov 6 00:24:08.088242 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable Nov 6 00:24:08.088251 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable Nov 6 00:24:08.088258 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable Nov 6 00:24:08.088266 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable Nov 6 00:24:08.088275 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 6 00:24:08.088284 
kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 6 00:24:08.088293 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 6 00:24:08.088302 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 6 00:24:08.088311 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 6 00:24:08.088320 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 6 00:24:08.088329 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 6 00:24:08.088345 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 6 00:24:08.088354 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 6 00:24:08.088364 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 00:24:08.088371 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 00:24:08.088379 kernel: efi: EFI v2.7 by EDK II Nov 6 00:24:08.088386 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Nov 6 00:24:08.088396 kernel: random: crng init done Nov 6 00:24:08.088403 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Nov 6 00:24:08.088411 kernel: secureboot: Secure boot enabled Nov 6 00:24:08.088418 kernel: SMBIOS 2.8 present. Nov 6 00:24:08.088426 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 6 00:24:08.088433 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:24:08.088440 kernel: Hypervisor detected: KVM Nov 6 00:24:08.088448 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 6 00:24:08.088455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:24:08.088463 kernel: kvm-clock: using sched offset of 8489542306 cycles Nov 6 00:24:08.088471 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:24:08.088489 kernel: tsc: Detected 2794.748 MHz processor Nov 6 00:24:08.088497 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:24:08.088505 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:24:08.088513 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 6 00:24:08.088524 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 00:24:08.088532 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:24:08.088542 kernel: Using GB pages for direct mapping Nov 6 00:24:08.088549 kernel: ACPI: Early table checksum verification disabled Nov 6 00:24:08.088557 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Nov 6 00:24:08.088568 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 6 00:24:08.088575 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:24:08.088590 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:24:08.088598 kernel: ACPI: FACS 0x000000009BBDD000 000040 Nov 6 00:24:08.088605 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:24:08.088613 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:24:08.088636 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:24:08.088649 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Nov 6 00:24:08.088657 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 6 00:24:08.088674 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Nov 6 00:24:08.088683 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Nov 6 00:24:08.088694 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Nov 6 00:24:08.088703 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Nov 6 00:24:08.088713 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Nov 6 00:24:08.088723 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Nov 6 00:24:08.088731 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Nov 6 00:24:08.088738 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Nov 6 00:24:08.088746 kernel: No NUMA configuration found Nov 6 00:24:08.088760 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Nov 6 00:24:08.088768 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Nov 6 00:24:08.088775 kernel: Zone ranges: Nov 6 00:24:08.088783 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:24:08.088791 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Nov 6 00:24:08.088798 kernel: Normal empty Nov 6 00:24:08.088805 kernel: Device empty Nov 6 00:24:08.088813 kernel: Movable zone start for each node Nov 6 00:24:08.088820 kernel: Early memory node ranges Nov 6 00:24:08.088828 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Nov 6 00:24:08.088837 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Nov 6 00:24:08.088845 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Nov 6 00:24:08.088852 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Nov 6 00:24:08.088860 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Nov 6 00:24:08.088867 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Nov 6 00:24:08.088875 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:24:08.088882 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Nov 6 00:24:08.088890 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 00:24:08.088897 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 6 00:24:08.088907 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 6 00:24:08.088915 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Nov 6 00:24:08.088930 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 00:24:08.088938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:24:08.088948 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:24:08.088956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 00:24:08.088966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:24:08.088974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:24:08.088981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:24:08.088991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:24:08.088999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:24:08.089006 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 00:24:08.089014 kernel: TSC deadline timer available Nov 6 00:24:08.089021 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:24:08.089031 kernel: CPU topo: Max. 
logical dies: 1 Nov 6 00:24:08.089046 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:24:08.089056 kernel: CPU topo: Max. threads per core: 1 Nov 6 00:24:08.089064 kernel: CPU topo: Num. cores per package: 4 Nov 6 00:24:08.089071 kernel: CPU topo: Num. threads per package: 4 Nov 6 00:24:08.089090 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 6 00:24:08.089099 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 00:24:08.089110 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 6 00:24:08.089118 kernel: kvm-guest: setup PV sched yield Nov 6 00:24:08.089126 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 6 00:24:08.089134 kernel: Booting paravirtualized kernel on KVM Nov 6 00:24:08.089142 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:24:08.089152 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 6 00:24:08.089161 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 6 00:24:08.089168 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 6 00:24:08.089176 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 6 00:24:08.089184 kernel: kvm-guest: PV spinlocks enabled Nov 6 00:24:08.089192 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:24:08.089201 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:24:08.089209 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 00:24:08.089220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 00:24:08.089227 kernel: Fallback order for Node 0: 0 Nov 6 00:24:08.089235 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Nov 6 00:24:08.089243 kernel: Policy zone: DMA32 Nov 6 00:24:08.089251 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:24:08.089259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 00:24:08.089267 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:24:08.089275 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:24:08.089282 kernel: Dynamic Preempt: voluntary Nov 6 00:24:08.089292 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:24:08.089301 kernel: rcu: RCU event tracing is enabled. Nov 6 00:24:08.089309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 00:24:08.089317 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:24:08.089325 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:24:08.089333 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:24:08.089341 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:24:08.089349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 00:24:08.089357 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 00:24:08.089367 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 00:24:08.089378 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
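The "Kernel command line:" record above carries the parameters the Flatcar initrd acts on later in this log: root=LABEL=ROOT selects the root filesystem, mount.usr and verity.usr point at the dm-verity-protected USR partition, and verity.usrhash is its expected root hash (rootflags=rw and mount.usrflags=ro appear twice in this line). Below is a minimal Python sketch of splitting such a line into key/value pairs for inspection; it is not a reimplementation of the kernel's or dracut's parser and ignores quoted values containing spaces.

    # Minimal sketch (not the kernel's or dracut's parser): split a kernel
    # command line like the "Kernel command line:" record above into
    # key -> list-of-values, so duplicated parameters stay visible.
    from collections import defaultdict

    def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
        params: dict[str, list[str]] = defaultdict(list)
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key].append(value if sep else "")  # bare flags keep ""
        return dict(params)

    if __name__ == "__main__":
        # On the booted system the same string is readable from /proc/cmdline.
        with open("/proc/cmdline") as f:
            parsed = parse_cmdline(f.read())
        print("root:          ", parsed.get("root"))
        print("usr device:    ", parsed.get("mount.usr"))
        print("verity usrhash:", parsed.get("verity.usrhash"))

Run against /proc/cmdline on the booted machine, the verity.usrhash value printed should match the hash the initrd uses when it sets up /dev/mapper/usr further down in this log.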
Nov 6 00:24:08.089386 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 6 00:24:08.089394 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 00:24:08.089402 kernel: Console: colour dummy device 80x25 Nov 6 00:24:08.089409 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:24:08.089417 kernel: ACPI: Core revision 20240827 Nov 6 00:24:08.089425 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 00:24:08.089433 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:24:08.089443 kernel: x2apic enabled Nov 6 00:24:08.089451 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:24:08.089459 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 6 00:24:08.089467 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 6 00:24:08.089475 kernel: kvm-guest: setup PV IPIs Nov 6 00:24:08.089483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 00:24:08.089491 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 00:24:08.089499 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Nov 6 00:24:08.089507 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 00:24:08.089517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 00:24:08.089525 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 00:24:08.089533 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:24:08.089541 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:24:08.089549 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:24:08.089557 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 00:24:08.089565 kernel: active return thunk: retbleed_return_thunk Nov 6 00:24:08.089573 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 00:24:08.089581 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 00:24:08.089591 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 00:24:08.089599 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 6 00:24:08.089607 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 6 00:24:08.089615 kernel: active return thunk: srso_return_thunk Nov 6 00:24:08.089637 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 6 00:24:08.089645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:24:08.089653 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:24:08.089660 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:24:08.089671 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:24:08.089679 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 6 00:24:08.089687 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:24:08.089694 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:24:08.089702 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:24:08.089710 kernel: landlock: Up and running. Nov 6 00:24:08.089718 kernel: SELinux: Initializing. 
Nov 6 00:24:08.089726 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 00:24:08.089734 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 00:24:08.089744 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 00:24:08.089752 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 00:24:08.089759 kernel: ... version: 0 Nov 6 00:24:08.089770 kernel: ... bit width: 48 Nov 6 00:24:08.089778 kernel: ... generic registers: 6 Nov 6 00:24:08.089785 kernel: ... value mask: 0000ffffffffffff Nov 6 00:24:08.089793 kernel: ... max period: 00007fffffffffff Nov 6 00:24:08.089801 kernel: ... fixed-purpose events: 0 Nov 6 00:24:08.089809 kernel: ... event mask: 000000000000003f Nov 6 00:24:08.089819 kernel: signal: max sigframe size: 1776 Nov 6 00:24:08.089826 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:24:08.089834 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:24:08.089842 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:24:08.089850 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:24:08.089858 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:24:08.089866 kernel: .... node #0, CPUs: #1 #2 #3 Nov 6 00:24:08.089874 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 00:24:08.089882 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 6 00:24:08.089890 kernel: Memory: 2403072K/2552216K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 143208K reserved, 0K cma-reserved) Nov 6 00:24:08.089900 kernel: devtmpfs: initialized Nov 6 00:24:08.089908 kernel: x86/mm: Memory block size: 128MB Nov 6 00:24:08.089916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Nov 6 00:24:08.089930 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Nov 6 00:24:08.089938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:24:08.089946 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 00:24:08.089960 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:24:08.089970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:24:08.089981 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:24:08.089989 kernel: audit: type=2000 audit(1762388643.903:1): state=initialized audit_enabled=0 res=1 Nov 6 00:24:08.089997 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:24:08.090004 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:24:08.090012 kernel: cpuidle: using governor menu Nov 6 00:24:08.090020 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:24:08.090028 kernel: dca service started, version 1.12.1 Nov 6 00:24:08.090036 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Nov 6 00:24:08.090044 kernel: PCI: Using configuration type 1 for base access Nov 6 00:24:08.090054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:24:08.090062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:24:08.090070 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:24:08.090078 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:24:08.090085 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:24:08.090093 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:24:08.090101 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:24:08.090109 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:24:08.090117 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:24:08.090127 kernel: ACPI: Interpreter enabled Nov 6 00:24:08.090134 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:24:08.090142 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:24:08.090150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:24:08.090158 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 00:24:08.090166 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 00:24:08.090174 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:24:08.090475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:24:08.090618 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 00:24:08.090760 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 00:24:08.090771 kernel: PCI host bridge to bus 0000:00 Nov 6 00:24:08.090919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:24:08.091048 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:24:08.091162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 00:24:08.091310 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 6 00:24:08.091495 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 6 00:24:08.091610 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 6 00:24:08.091760 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:24:08.092512 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 00:24:08.092704 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 00:24:08.092851 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 6 00:24:08.093012 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 6 00:24:08.093137 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 6 00:24:08.093267 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 00:24:08.093430 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 00:24:08.093555 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 6 00:24:08.093703 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 6 00:24:08.093828 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 6 00:24:08.093983 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 00:24:08.094109 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 6 00:24:08.094233 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Nov 6 00:24:08.094475 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 6 
00:24:08.094696 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 00:24:08.094838 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 6 00:24:08.094988 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 6 00:24:08.095119 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 6 00:24:08.095246 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 6 00:24:08.095418 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:24:08.095541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 00:24:08.095695 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 00:24:08.095823 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 6 00:24:08.095963 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 6 00:24:08.096111 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 00:24:08.096243 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 6 00:24:08.096256 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:24:08.096264 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:24:08.096272 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:24:08.096281 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:24:08.096289 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 00:24:08.096297 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 00:24:08.096311 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 00:24:08.096321 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 00:24:08.096332 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 00:24:08.096342 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 00:24:08.096352 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 00:24:08.096362 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 00:24:08.096372 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 00:24:08.096382 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 00:24:08.096392 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 00:24:08.096405 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 00:24:08.096415 kernel: iommu: Default domain type: Translated Nov 6 00:24:08.096425 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:24:08.096435 kernel: efivars: Registered efivars operations Nov 6 00:24:08.096446 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:24:08.096456 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:24:08.096469 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Nov 6 00:24:08.096479 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Nov 6 00:24:08.096489 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Nov 6 00:24:08.096502 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Nov 6 00:24:08.096512 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Nov 6 00:24:08.096671 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 00:24:08.096799 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 00:24:08.096931 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:24:08.096942 kernel: vgaarb: loaded Nov 6 00:24:08.096951 kernel: 
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 00:24:08.096960 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 00:24:08.096972 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:24:08.096980 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:24:08.096988 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:24:08.096997 kernel: pnp: PnP ACPI init Nov 6 00:24:08.097145 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 6 00:24:08.097157 kernel: pnp: PnP ACPI: found 6 devices Nov 6 00:24:08.097166 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:24:08.097174 kernel: NET: Registered PF_INET protocol family Nov 6 00:24:08.097186 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 00:24:08.097194 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 00:24:08.097203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:24:08.097211 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:24:08.097219 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 6 00:24:08.097227 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 00:24:08.097236 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 00:24:08.097244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 00:24:08.097252 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:24:08.097263 kernel: NET: Registered PF_XDP protocol family Nov 6 00:24:08.097390 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Nov 6 00:24:08.097514 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Nov 6 00:24:08.097651 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:24:08.097765 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:24:08.097903 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 00:24:08.098026 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 6 00:24:08.098140 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 6 00:24:08.098256 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 6 00:24:08.098267 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:24:08.098276 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 00:24:08.098284 kernel: Initialise system trusted keyrings Nov 6 00:24:08.098292 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 00:24:08.098301 kernel: Key type asymmetric registered Nov 6 00:24:08.098310 kernel: Asymmetric key parser 'x509' registered Nov 6 00:24:08.098339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:24:08.098355 kernel: io scheduler mq-deadline registered Nov 6 00:24:08.098366 kernel: io scheduler kyber registered Nov 6 00:24:08.098376 kernel: io scheduler bfq registered Nov 6 00:24:08.098387 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:24:08.098398 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 00:24:08.098409 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 00:24:08.098420 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 6 00:24:08.098430 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Nov 6 00:24:08.098441 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:24:08.098451 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:24:08.098464 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:24:08.098475 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:24:08.098655 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 6 00:24:08.098669 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 00:24:08.098787 kernel: rtc_cmos 00:04: registered as rtc0 Nov 6 00:24:08.098904 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:24:07 UTC (1762388647) Nov 6 00:24:08.099031 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 6 00:24:08.099046 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 00:24:08.099055 kernel: efifb: probing for efifb Nov 6 00:24:08.099064 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 6 00:24:08.099072 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 6 00:24:08.099080 kernel: efifb: scrolling: redraw Nov 6 00:24:08.099089 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 00:24:08.099098 kernel: Console: switching to colour frame buffer device 160x50 Nov 6 00:24:08.099109 kernel: fb0: EFI VGA frame buffer device Nov 6 00:24:08.099118 kernel: pstore: Using crash dump compression: deflate Nov 6 00:24:08.099126 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 00:24:08.099134 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:24:08.099143 kernel: Segment Routing with IPv6 Nov 6 00:24:08.099151 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:24:08.099160 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:24:08.099169 kernel: Key type dns_resolver registered Nov 6 00:24:08.099179 kernel: IPI shorthand broadcast: enabled Nov 6 00:24:08.099188 kernel: sched_clock: Marking stable (4642005839, 281521516)->(5057376657, -133849302) Nov 6 00:24:08.099196 kernel: registered taskstats version 1 Nov 6 00:24:08.099205 kernel: Loading compiled-in X.509 certificates Nov 6 00:24:08.099214 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:24:08.099222 kernel: Demotion targets for Node 0: null Nov 6 00:24:08.099231 kernel: Key type .fscrypt registered Nov 6 00:24:08.099239 kernel: Key type fscrypt-provisioning registered Nov 6 00:24:08.099250 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:24:08.099263 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:24:08.099274 kernel: ima: No architecture policies found Nov 6 00:24:08.099284 kernel: clk: Disabling unused clocks Nov 6 00:24:08.099295 kernel: Warning: unable to open an initial console. Nov 6 00:24:08.099307 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:24:08.099320 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:24:08.099331 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:24:08.099342 kernel: Run /init as init process Nov 6 00:24:08.099352 kernel: with arguments: Nov 6 00:24:08.099365 kernel: /init Nov 6 00:24:08.099376 kernel: with environment: Nov 6 00:24:08.099386 kernel: HOME=/ Nov 6 00:24:08.099395 kernel: TERM=linux Nov 6 00:24:08.099404 systemd[1]: Successfully made /usr/ read-only. 
Nov 6 00:24:08.099417 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:24:08.099426 systemd[1]: Detected virtualization kvm. Nov 6 00:24:08.099437 systemd[1]: Detected architecture x86-64. Nov 6 00:24:08.099446 systemd[1]: Running in initrd. Nov 6 00:24:08.099454 systemd[1]: No hostname configured, using default hostname. Nov 6 00:24:08.099464 systemd[1]: Hostname set to . Nov 6 00:24:08.099472 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:24:08.099481 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:24:08.099490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:24:08.099499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:24:08.099511 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:24:08.099520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:24:08.099529 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:24:08.099539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:24:08.099549 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:24:08.099558 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:24:08.099567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:24:08.099578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:24:08.099587 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:24:08.099596 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:24:08.099605 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:24:08.099614 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:24:08.099636 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:24:08.099645 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:24:08.099654 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:24:08.099663 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:24:08.099674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:24:08.099683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:24:08.099779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:24:08.099789 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:24:08.099798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:24:08.099813 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:24:08.099822 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 6 00:24:08.099831 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:24:08.099845 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:24:08.099854 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:24:08.099863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:24:08.099872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:08.099912 systemd-journald[201]: Collecting audit messages is disabled. Nov 6 00:24:08.099948 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:24:08.099960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:24:08.099972 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:24:08.099987 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:24:08.099999 systemd-journald[201]: Journal started Nov 6 00:24:08.100023 systemd-journald[201]: Runtime Journal (/run/log/journal/bfc6aeb3d8f4452e9f1107adf29fe083) is 5.9M, max 47.9M, 41.9M free. Nov 6 00:24:08.107198 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:24:08.111144 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:24:08.154616 systemd-modules-load[204]: Inserted module 'overlay' Nov 6 00:24:08.159086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:24:08.162763 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:24:08.168121 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:24:08.178006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:24:08.185465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:08.192080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:24:08.201690 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:24:08.201723 kernel: Bridge firewalling registered Nov 6 00:24:08.199179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:24:08.199546 systemd-modules-load[204]: Inserted module 'br_netfilter' Nov 6 00:24:08.201906 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:24:08.218500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:24:08.231089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:08.233091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:24:08.239280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:24:08.244006 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 6 00:24:08.263405 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:24:08.282544 systemd-resolved[242]: Positive Trust Anchors: Nov 6 00:24:08.282572 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:24:08.282610 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:24:08.285460 systemd-resolved[242]: Defaulting to hostname 'linux'. Nov 6 00:24:08.286778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:24:08.314053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:24:08.439692 kernel: SCSI subsystem initialized Nov 6 00:24:08.449866 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:24:08.463678 kernel: iscsi: registered transport (tcp) Nov 6 00:24:08.492665 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:24:08.492736 kernel: QLogic iSCSI HBA Driver Nov 6 00:24:08.518593 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:24:08.540072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:24:08.540864 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:24:08.636486 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:24:08.638572 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:24:08.707677 kernel: raid6: avx2x4 gen() 25350 MB/s Nov 6 00:24:08.724670 kernel: raid6: avx2x2 gen() 25012 MB/s Nov 6 00:24:08.742695 kernel: raid6: avx2x1 gen() 18224 MB/s Nov 6 00:24:08.742749 kernel: raid6: using algorithm avx2x4 gen() 25350 MB/s Nov 6 00:24:08.768408 kernel: raid6: .... xor() 5126 MB/s, rmw enabled Nov 6 00:24:08.768440 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:24:08.805662 kernel: xor: automatically using best checksumming function avx Nov 6 00:24:09.015672 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:24:09.026029 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:24:09.028282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:24:09.069469 systemd-udevd[452]: Using default interface naming scheme 'v255'. Nov 6 00:24:09.075549 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:24:09.076717 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 6 00:24:09.111403 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Nov 6 00:24:09.148175 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:24:09.151782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:24:09.288220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:24:09.291807 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:24:09.363668 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 00:24:09.378660 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 00:24:09.382647 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:24:09.382677 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 6 00:24:09.392000 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:24:09.392055 kernel: GPT:9289727 != 19775487 Nov 6 00:24:09.392066 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:24:09.392077 kernel: GPT:9289727 != 19775487 Nov 6 00:24:09.392087 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 00:24:09.393882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:24:09.399005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:24:09.399183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:09.405465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:09.410083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:09.412981 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:09.431993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:24:09.432140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:09.443545 kernel: AES CTR mode by8 optimization enabled Nov 6 00:24:09.438804 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:09.451647 kernel: libata version 3.00 loaded. Nov 6 00:24:09.474619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 00:24:09.536775 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 00:24:09.554658 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 00:24:09.555058 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 00:24:09.565163 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 00:24:09.565670 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 00:24:09.565864 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 00:24:09.566058 kernel: scsi host0: ahci Nov 6 00:24:09.567736 kernel: scsi host1: ahci Nov 6 00:24:09.567989 kernel: scsi host2: ahci Nov 6 00:24:09.570166 kernel: scsi host3: ahci Nov 6 00:24:09.570387 kernel: scsi host4: ahci Nov 6 00:24:09.570592 kernel: scsi host5: ahci Nov 6 00:24:09.572616 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Nov 6 00:24:09.572661 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Nov 6 00:24:09.574459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
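The GPT warnings above ("GPT:Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 19775487") typically mean a disk image built for a smaller disk was attached to a larger virtual disk: the backup GPT header still sits at the image's original last sector rather than at the device's. The following is a minimal sketch of the same comparison, reading the primary header directly; the /dev/vda path and 512-byte sectors are assumptions for illustration.

    # Minimal sketch of the check behind "GPT:9289727 != 19775487" above:
    # read the primary GPT header at LBA 1 and compare its alternate-LBA
    # field with the device's real last sector. /dev/vda and 512-byte
    # sectors are assumptions; any raw disk or disk image file works.
    import os
    import struct

    DEV = "/dev/vda"
    SECTOR = 512

    with open(DEV, "rb") as f:
        f.seek(1 * SECTOR)              # primary GPT header lives at LBA 1
        header = f.read(92)
        assert header[:8] == b"EFI PART", "no GPT signature at LBA 1"
        # Header fields: current LBA at byte offset 24, alternate LBA at 32.
        current_lba, alternate_lba = struct.unpack_from("<QQ", header, 24)
        last_lba = f.seek(0, os.SEEK_END) // SECTOR - 1

    print(f"backup header recorded at LBA {alternate_lba}, disk ends at LBA {last_lba}")
    if alternate_lba != last_lba:
        print("backup GPT is not at the end of the disk (the kernel's warning above)")

This sketch only reproduces the detection; the repair itself is what the log's own hint ("GPT: Use GNU Parted to correct GPT errors.") refers to.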
Nov 6 00:24:09.584738 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Nov 6 00:24:09.584761 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Nov 6 00:24:09.584771 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Nov 6 00:24:09.584782 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Nov 6 00:24:09.591973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 00:24:09.596366 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 6 00:24:09.603232 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:24:09.608091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:09.649161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:09.798997 disk-uuid[613]: Primary Header is updated. Nov 6 00:24:09.798997 disk-uuid[613]: Secondary Entries is updated. Nov 6 00:24:09.798997 disk-uuid[613]: Secondary Header is updated. Nov 6 00:24:09.804753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:24:09.886657 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 00:24:09.886717 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:24:09.886729 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 00:24:09.886912 kernel: ata3.00: applying bridge limits Nov 6 00:24:09.889108 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:24:09.889138 kernel: ata3.00: configured for UDMA/100 Nov 6 00:24:09.891557 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 00:24:09.893663 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 00:24:09.893694 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 00:24:09.894650 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 00:24:09.899288 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 00:24:09.899317 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 00:24:09.951347 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 00:24:09.951658 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:24:09.971918 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 6 00:24:10.425646 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:24:10.428797 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:24:10.431717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:24:10.435710 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:24:10.437430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:24:10.465884 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:24:10.829417 disk-uuid[618]: The operation has completed successfully. Nov 6 00:24:10.831559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:24:10.870258 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:24:10.876459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:24:10.926676 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 6 00:24:10.954880 sh[647]: Success Nov 6 00:24:10.988144 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:24:10.988237 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:24:10.990215 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:24:11.001676 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:24:11.041606 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:24:11.047406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:24:11.069668 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:24:11.078663 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (659) Nov 6 00:24:11.078710 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:24:11.082657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:11.089684 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:24:11.089797 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:24:11.091916 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:24:11.106154 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:24:11.108508 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:24:11.109693 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:24:11.125095 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:24:11.151691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (682) Nov 6 00:24:11.151790 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:11.153676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:11.197686 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:24:11.197778 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:24:11.205659 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:11.206242 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:24:11.210226 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:24:11.283706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:24:11.350753 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:24:11.481101 systemd-networkd[830]: lo: Link UP Nov 6 00:24:11.481113 systemd-networkd[830]: lo: Gained carrier Nov 6 00:24:11.482821 systemd-networkd[830]: Enumeration completed Nov 6 00:24:11.482984 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:24:11.489900 ignition[771]: Ignition 2.22.0 Nov 6 00:24:11.483285 systemd-networkd[830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:11.489910 ignition[771]: Stage: fetch-offline Nov 6 00:24:11.483289 systemd-networkd[830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:24:11.489974 ignition[771]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:11.484935 systemd-networkd[830]: eth0: Link UP Nov 6 00:24:11.489988 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:11.485144 systemd-networkd[830]: eth0: Gained carrier Nov 6 00:24:11.490103 ignition[771]: parsed url from cmdline: "" Nov 6 00:24:11.485153 systemd-networkd[830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:11.490108 ignition[771]: no config URL provided Nov 6 00:24:11.486305 systemd[1]: Reached target network.target - Network. Nov 6 00:24:11.490116 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:24:11.505761 systemd-networkd[830]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:24:11.490132 ignition[771]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:24:11.490164 ignition[771]: op(1): [started] loading QEMU firmware config module Nov 6 00:24:11.490171 ignition[771]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 00:24:11.526301 ignition[771]: op(1): [finished] loading QEMU firmware config module Nov 6 00:24:11.617256 ignition[771]: parsing config with SHA512: 86d8e5014e69020a567e065d7bf083fb85449dab227e6bcb41b5fc71fac5eab92fce8329b610499c93377fd85c030e4dd41f3ea6a35f80d7be0e6cb00ce251f2 Nov 6 00:24:11.678946 unknown[771]: fetched base config from "system" Nov 6 00:24:11.678962 unknown[771]: fetched user config from "qemu" Nov 6 00:24:11.679355 ignition[771]: fetch-offline: fetch-offline passed Nov 6 00:24:11.682597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:24:11.679414 ignition[771]: Ignition finished successfully Nov 6 00:24:11.697562 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 00:24:11.698713 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:24:11.755034 ignition[843]: Ignition 2.22.0 Nov 6 00:24:11.755047 ignition[843]: Stage: kargs Nov 6 00:24:11.791441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:24:11.755194 ignition[843]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:11.795195 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:24:11.755205 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:11.756216 ignition[843]: kargs: kargs passed Nov 6 00:24:11.756264 ignition[843]: Ignition finished successfully Nov 6 00:24:11.921387 ignition[851]: Ignition 2.22.0 Nov 6 00:24:11.921400 ignition[851]: Stage: disks Nov 6 00:24:11.921555 ignition[851]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:11.921565 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:11.922277 ignition[851]: disks: disks passed Nov 6 00:24:11.922323 ignition[851]: Ignition finished successfully Nov 6 00:24:11.928280 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:24:11.931046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:24:11.934120 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:24:11.934459 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:24:11.940919 systemd[1]: Reached target sysinit.target - System Initialization. 
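In the fetch-offline stage above, Ignition reports the digest of the configuration it is about to parse ("parsing config with SHA512: 86d8e5...") after pulling the user config from QEMU's firmware config interface ("fetched user config from 'qemu'"). A small sketch for recomputing such a digest from a local copy of the config follows; the config.ign filename is an assumption, and whether the bytes match exactly depends on how the config was delivered and merged, so treat it as a comparison aid rather than a guarantee.

    # Minimal sketch: recompute the SHA512 digest that Ignition logs above
    # for the config it parses. The config.ign path is an assumption; on
    # QEMU the user config is commonly handed to the VM via
    #   -fw_cfg name=opt/com.coreos/config,file=config.ign
    import hashlib
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else "config.ign"
    with open(path, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()
    print(f"sha512({path}) = {digest}")
    # Compare with the value in the journal line above to confirm the VM
    # booted with the config you intended.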
Nov 6 00:24:11.944275 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:24:11.949175 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:24:12.018925 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 6 00:24:12.344007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:24:12.351026 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:24:12.541673 kernel: EXT4-fs (vda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:24:12.543302 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:24:12.545805 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:24:12.549844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:24:12.553269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:24:12.557170 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 00:24:12.557237 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:24:12.557272 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:24:12.585778 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Nov 6 00:24:12.585825 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:12.585847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:12.585863 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:24:12.585877 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:24:12.565230 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:24:12.572985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 00:24:12.590741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:24:12.630652 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:24:12.636648 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:24:12.757804 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:24:12.763343 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:24:12.879827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:24:12.885928 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:24:12.890976 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:24:12.929026 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:24:12.931611 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:12.953408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
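systemd-fsck is pointed at /dev/disk/by-label/ROOT rather than a raw device node; those by-label names are udev-maintained symlinks. A small sketch that resolves them, which can help match log lines like the ones above to actual devices (vda6, vda9, and so on):

```python
import os

BY_LABEL = "/dev/disk/by-label"  # udev-maintained symlink directory

def labelled_devices():
    """Map filesystem labels (e.g. 'ROOT', 'OEM') to the block devices the
    symlinks under /dev/disk/by-label currently point at."""
    result = {}
    if not os.path.isdir(BY_LABEL):
        return result
    for name in os.listdir(BY_LABEL):
        result[name] = os.path.realpath(os.path.join(BY_LABEL, name))
    return result

if __name__ == "__main__":
    for label, dev in sorted(labelled_devices().items()):
        print(f"{label:10s} -> {dev}")
```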
Nov 6 00:24:13.051153 ignition[983]: INFO : Ignition 2.22.0 Nov 6 00:24:13.051153 ignition[983]: INFO : Stage: mount Nov 6 00:24:13.053956 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:13.053956 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:13.053956 ignition[983]: INFO : mount: mount passed Nov 6 00:24:13.053956 ignition[983]: INFO : Ignition finished successfully Nov 6 00:24:13.062763 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:24:13.066215 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:24:13.411910 systemd-networkd[830]: eth0: Gained IPv6LL Nov 6 00:24:13.544591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:24:13.595924 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Nov 6 00:24:13.595984 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:13.595996 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:13.653300 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:24:13.653391 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:24:13.655382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:24:13.698872 ignition[1012]: INFO : Ignition 2.22.0 Nov 6 00:24:13.698872 ignition[1012]: INFO : Stage: files Nov 6 00:24:13.701826 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:13.701826 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:13.701826 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:24:13.701826 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:24:13.701826 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:24:13.712385 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:24:13.712385 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:24:13.712385 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:24:13.712385 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:24:13.712385 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:24:13.703467 unknown[1012]: wrote ssh authorized keys file for user: core Nov 6 00:24:13.804942 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:24:13.928161 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:24:13.928161 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:24:13.935785 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 00:24:14.147935 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 00:24:14.348399 ignition[1012]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:24:14.348399 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:24:14.359820 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:24:14.493235 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:24:14.496724 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:24:14.499942 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:14.597131 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:14.597131 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:14.606435 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 00:24:14.977875 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 00:24:15.327706 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:15.327706 ignition[1012]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" 
Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 6 00:24:15.335083 ignition[1012]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 00:24:15.365992 ignition[1012]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:24:15.371776 ignition[1012]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:24:15.374593 ignition[1012]: INFO : files: files passed Nov 6 00:24:15.374593 ignition[1012]: INFO : Ignition finished successfully Nov 6 00:24:15.380114 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:24:15.386751 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:24:15.390166 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:24:15.408676 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:24:15.408852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 00:24:15.414331 initrd-setup-root-after-ignition[1041]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 00:24:15.416644 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:15.416644 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:15.422444 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:15.426930 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:24:15.429404 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:24:15.436084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:24:15.493386 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:24:15.493526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:24:15.495408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:24:15.499333 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:24:15.502580 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:24:15.505890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
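The files stage ends by writing /sysroot/etc/.ignition-result.json; after the switch to the real root that is expected to appear as /etc/.ignition-result.json (an assumption, since the log does not show the path post-switch). A sketch that simply loads and pretty-prints it, as the schema is not visible in this log:

```python
import json

RESULT_PATH = "/etc/.ignition-result.json"  # written as /sysroot/etc/... above

def main() -> None:
    """Pretty-print whatever Ignition recorded about this boot's provisioning.
    Nothing beyond valid JSON is assumed about the file's structure."""
    with open(RESULT_PATH, "r", encoding="utf-8") as f:
        print(json.dumps(json.load(f), indent=2, sort_keys=True))

if __name__ == "__main__":
    main()
```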
Nov 6 00:24:15.543968 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:24:15.545683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:24:15.572649 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:24:15.572931 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:24:15.578402 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:24:15.581796 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:24:15.581943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:24:15.583877 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:24:15.584471 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:24:15.585062 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:24:15.585598 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:24:15.602473 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:24:15.602729 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:24:15.606468 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:24:15.613416 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:24:15.615784 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:24:15.619472 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:24:15.622855 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:24:15.624489 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:24:15.624776 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:24:15.632046 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:24:15.632260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:24:15.635934 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:24:15.636126 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:24:15.639955 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:24:15.640109 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:24:15.647752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:24:15.647890 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:24:15.649528 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:24:15.654473 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:24:15.654745 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:24:15.656461 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:24:15.661263 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:24:15.665669 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:24:15.665814 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:24:15.666484 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:24:15.666582 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:24:15.667149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 6 00:24:15.667277 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:24:15.678410 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:24:15.678554 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:24:15.686770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:24:15.693702 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:24:15.695324 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:24:15.695517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:24:15.696316 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:24:15.696418 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:24:15.707020 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:24:15.713916 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:24:15.738587 ignition[1068]: INFO : Ignition 2.22.0 Nov 6 00:24:15.738587 ignition[1068]: INFO : Stage: umount Nov 6 00:24:15.741444 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:15.741444 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:24:15.740045 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:24:15.747053 ignition[1068]: INFO : umount: umount passed Nov 6 00:24:15.747053 ignition[1068]: INFO : Ignition finished successfully Nov 6 00:24:15.747271 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:24:15.747421 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:24:15.751269 systemd[1]: Stopped target network.target - Network. Nov 6 00:24:15.754341 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:24:15.754443 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:24:15.758002 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:24:15.758082 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:24:15.760060 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:24:15.760129 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:24:15.765066 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:24:15.765131 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:24:15.766990 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:24:15.770493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:24:15.778786 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:24:15.779052 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:24:15.796105 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:24:15.796450 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:24:15.796584 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:24:15.803171 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:24:15.803951 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:24:15.808108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:24:15.808165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Nov 6 00:24:15.812369 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:24:15.814501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:24:15.814559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:24:15.815590 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:24:15.815654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:15.821698 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:24:15.821766 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:24:15.825178 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:24:15.825234 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:24:15.831087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:24:15.838086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:24:15.838178 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:15.856716 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:24:15.868000 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:24:15.871201 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:24:15.871285 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:24:15.875229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:24:15.875303 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:24:15.877160 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:24:15.877252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:24:15.883177 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:24:15.883237 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:24:15.888897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:24:15.888951 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:24:15.900493 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:24:15.900565 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:24:15.900620 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:24:15.908899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:24:15.908971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:24:15.915834 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 00:24:15.915906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:24:15.921766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:24:15.921864 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:24:15.924750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:24:15.924846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 6 00:24:15.934525 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:24:15.934601 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 6 00:24:15.934679 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:24:15.934780 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:15.935252 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:24:15.935399 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:24:15.943170 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:24:15.943314 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:24:15.964475 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:24:15.964710 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:24:15.967002 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:24:15.969492 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:24:15.969575 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:24:15.978015 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:24:16.013172 systemd[1]: Switching root. Nov 6 00:24:16.060822 systemd-journald[201]: Journal stopped Nov 6 00:24:17.702430 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 6 00:24:17.702499 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:24:17.702518 kernel: SELinux: policy capability open_perms=1 Nov 6 00:24:17.702530 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:24:17.702547 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:24:17.702558 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:24:17.702570 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:24:17.702591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:24:17.702602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:24:17.702632 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:24:17.702645 kernel: audit: type=1403 audit(1762388656.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:24:17.702670 systemd[1]: Successfully loaded SELinux policy in 67.319ms. Nov 6 00:24:17.702690 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.993ms. Nov 6 00:24:17.702705 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:24:17.702717 systemd[1]: Detected virtualization kvm. Nov 6 00:24:17.702730 systemd[1]: Detected architecture x86-64. Nov 6 00:24:17.702742 systemd[1]: Detected first boot. Nov 6 00:24:17.702754 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:24:17.702766 zram_generator::config[1112]: No configuration found. 
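The journal above records timings such as the 67.319 ms SELinux policy load and the 9.993 ms relabel of /dev/, /dev/shm/ and /run/. A rough sketch for pulling those durations out of a saved console log to spot slow boot steps; the regex is an approximation of how the messages are worded, not an exhaustive parser:

```python
import re
import sys

# Matches durations printed like "67.319ms", "9.993ms" or "454 ms".
DURATION = re.compile(r" in (\d+(?:\.\d+)?)\s*ms\b")

def slow_steps(log_path: str, threshold_ms: float = 50.0):
    """Yield (milliseconds, line) for log lines reporting a step slower than
    the threshold, e.g. the 67.319 ms policy load above."""
    with open(log_path, "r", errors="replace") as log:
        for line in log:
            m = DURATION.search(line)
            if m and float(m.group(1)) >= threshold_ms:
                yield float(m.group(1)), line.rstrip()

if __name__ == "__main__":
    for ms, line in sorted(slow_steps(sys.argv[1]), reverse=True):
        print(f"{ms:10.3f} ms  {line}")
```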
Nov 6 00:24:17.702782 kernel: Guest personality initialized and is inactive Nov 6 00:24:17.702793 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:24:17.702805 kernel: Initialized host personality Nov 6 00:24:17.702816 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:24:17.702830 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:24:17.702843 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:24:17.702856 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:24:17.702869 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:24:17.702881 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:24:17.702899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:24:17.702912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:24:17.702924 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:24:17.702936 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:24:17.702949 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:24:17.702961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:24:17.702973 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:24:17.702991 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:24:17.703003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:24:17.703018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:24:17.703030 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:24:17.703043 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:24:17.703055 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:24:17.703068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:24:17.703080 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:24:17.703092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:24:17.703108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:24:17.703120 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:24:17.703133 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:24:17.703145 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:24:17.703157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:24:17.703170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:24:17.703182 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:24:17.703194 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:24:17.703206 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:24:17.703218 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:24:17.703232 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Nov 6 00:24:17.703245 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:24:17.703258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:24:17.703270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:24:17.703283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:24:17.703295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:24:17.703307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:24:17.703320 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:24:17.703332 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:24:17.703347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:17.703359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:24:17.703372 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:24:17.703385 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:24:17.703398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:24:17.703410 systemd[1]: Reached target machines.target - Containers. Nov 6 00:24:17.703422 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:24:17.703434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:24:17.703450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:24:17.703462 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:24:17.703474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:24:17.703487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:24:17.703499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:24:17.703512 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:24:17.703524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:24:17.703537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:24:17.703551 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:24:17.703564 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:24:17.703576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:24:17.703588 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:24:17.703601 kernel: loop: module loaded Nov 6 00:24:17.703613 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:24:17.703657 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:24:17.703671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 6 00:24:17.703683 kernel: fuse: init (API version 7.41) Nov 6 00:24:17.703699 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:24:17.703711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:24:17.703724 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:24:17.703738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:24:17.703753 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:24:17.703766 systemd[1]: Stopped verity-setup.service. Nov 6 00:24:17.703778 kernel: ACPI: bus type drm_connector registered Nov 6 00:24:17.703791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:17.703804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:24:17.703816 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:24:17.703853 systemd-journald[1197]: Collecting audit messages is disabled. Nov 6 00:24:17.703877 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:24:17.703889 systemd-journald[1197]: Journal started Nov 6 00:24:17.703912 systemd-journald[1197]: Runtime Journal (/run/log/journal/bfc6aeb3d8f4452e9f1107adf29fe083) is 5.9M, max 47.9M, 41.9M free. Nov 6 00:24:17.339023 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:24:17.360481 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:24:17.361281 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:24:17.708675 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:24:17.711463 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:24:17.713966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:24:17.716502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:24:17.718992 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:24:17.721941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:24:17.725014 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:24:17.725255 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:24:17.728167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:24:17.728397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:24:17.731211 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:24:17.731468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:24:17.734094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:24:17.734358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:24:17.737388 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:24:17.737866 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:24:17.740906 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:24:17.741181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:24:17.744195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 6 00:24:17.747120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:24:17.750278 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:24:17.753485 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:24:17.770224 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:24:17.774479 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:24:17.778208 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:24:17.780771 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:24:17.780813 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:24:17.784311 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:24:17.788786 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:24:17.791371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:24:17.793169 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:24:17.796927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:24:17.799560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:24:17.800994 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:24:17.803466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:24:17.804789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:24:17.816825 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:24:17.821886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:24:17.828260 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:24:17.836975 systemd-journald[1197]: Time spent on flushing to /var/log/journal/bfc6aeb3d8f4452e9f1107adf29fe083 is 61.943ms for 1050 entries. Nov 6 00:24:17.836975 systemd-journald[1197]: System Journal (/var/log/journal/bfc6aeb3d8f4452e9f1107adf29fe083) is 8M, max 195.6M, 187.6M free. Nov 6 00:24:18.008716 systemd-journald[1197]: Received client request to flush runtime journal. Nov 6 00:24:17.997293 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:24:18.000850 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:24:18.003947 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:24:18.012527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:24:18.016969 kernel: loop0: detected capacity change from 0 to 110984 Nov 6 00:24:18.023870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:24:18.029009 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:24:18.040317 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. 
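The flush statistics above (61.943 ms for 1050 entries) work out to roughly 59 µs per journal entry; the arithmetic, for reference:

```python
# Figures taken from the systemd-journald message above.
flush_ms = 61.943
entries = 1050

per_entry_us = flush_ms * 1000 / entries  # microseconds per journal entry
print(f"{per_entry_us:.1f} µs per entry")  # ≈ 59.0 µs
```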
Nov 6 00:24:18.042354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:18.044756 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 6 00:24:18.053769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:24:18.054611 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:24:18.060167 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:24:18.120670 kernel: loop1: detected capacity change from 0 to 128016 Nov 6 00:24:18.171909 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:24:18.177780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:24:18.183521 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:24:18.222674 kernel: loop2: detected capacity change from 0 to 219144 Nov 6 00:24:18.234808 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 00:24:18.234837 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 00:24:18.242984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:24:18.264681 kernel: loop3: detected capacity change from 0 to 110984 Nov 6 00:24:18.280931 kernel: loop4: detected capacity change from 0 to 128016 Nov 6 00:24:18.299698 kernel: loop5: detected capacity change from 0 to 219144 Nov 6 00:24:18.309786 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 6 00:24:18.310576 (sd-merge)[1259]: Merged extensions into '/usr'. Nov 6 00:24:18.319173 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:24:18.319378 systemd[1]: Reloading... Nov 6 00:24:18.455691 zram_generator::config[1285]: No configuration found. Nov 6 00:24:18.664677 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:24:18.774353 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:24:18.774768 systemd[1]: Reloading finished in 454 ms. Nov 6 00:24:18.807490 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:24:18.810231 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:24:18.828836 systemd[1]: Starting ensure-sysext.service... Nov 6 00:24:18.832533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:24:18.872034 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:24:18.872118 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:24:18.872690 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:24:18.873127 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:24:18.874740 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:24:18.875173 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Nov 6 00:24:18.875277 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Nov 6 00:24:18.883933 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. 
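sd-merge reports the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions being merged into /usr. A sketch that lists candidate extension images on disk; /etc/extensions is where the kubernetes.raw link was written earlier in the files stage, while /var/lib/extensions is assumed from general systemd-sysext behaviour rather than from this log:

```python
import os

# /etc/extensions holds the kubernetes.raw link created above;
# /var/lib/extensions is an additional search location (assumption).
SEARCH_DIRS = ["/etc/extensions", "/var/lib/extensions"]

def list_sysexts():
    """Print each candidate system extension image or directory found."""
    for directory in SEARCH_DIRS:
        if not os.path.isdir(directory):
            continue
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            kind = "dir" if os.path.isdir(path) else "image"
            target = f" -> {os.path.realpath(path)}" if os.path.islink(path) else ""
            print(f"{kind:5s} {path}{target}")

if __name__ == "__main__":
    list_sysexts()
```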
Nov 6 00:24:18.883950 systemd-tmpfiles[1323]: Skipping /boot Nov 6 00:24:18.899932 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:24:18.899955 systemd-tmpfiles[1323]: Skipping /boot Nov 6 00:24:18.902571 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:24:18.905855 systemd[1]: Reload requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:24:18.905879 systemd[1]: Reloading... Nov 6 00:24:18.974728 zram_generator::config[1353]: No configuration found. Nov 6 00:24:19.201423 systemd[1]: Reloading finished in 294 ms. Nov 6 00:24:19.226677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:24:19.255216 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:24:19.259946 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:24:19.263448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:24:19.273651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:24:19.279153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:24:19.284285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:24:19.294004 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:24:19.298261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.298438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:24:19.300885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:24:19.309838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:24:19.314819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:24:19.317209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:24:19.317405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:24:19.317674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.319140 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:24:19.323060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:24:19.323349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:24:19.336701 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:24:19.343077 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Nov 6 00:24:19.345276 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:24:19.349185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:24:19.349614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:24:19.353276 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 6 00:24:19.353534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:24:19.361393 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.362535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:24:19.364884 augenrules[1423]: No rules Nov 6 00:24:19.365259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:24:19.367758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:24:19.367929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:24:19.368067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:24:19.368187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.369301 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:24:19.369679 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:24:19.372335 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:24:19.379863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.383002 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:24:19.385332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:24:19.386751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:24:19.392113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:24:19.405253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:24:19.407523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:24:19.407700 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:24:19.407878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:24:19.409102 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:24:19.411648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:24:19.433834 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:24:19.440415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:24:19.441675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:24:19.444567 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:24:19.445200 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
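The systemd-tmpfiles warnings a little earlier ('Duplicate line for path "/var/lib/nfs/sm", ignoring', and similar for /root and /var/log/journal) come from the same path being declared on more than one tmpfiles.d line. A rough re-check in Python; it scans the usual configuration directories and does not model tmpfiles' own override and masking rules, so its matches are hints rather than a faithful reproduction of the warnings:

```python
import glob
import os
from collections import defaultdict

# Directories systemd-tmpfiles reads; /usr/lib/tmpfiles.d is the one the
# duplicate-line warnings above point at.
TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def duplicate_paths():
    """Return paths declared by more than one tmpfiles.d line, mapped to the
    file:line locations that declare them."""
    seen = defaultdict(list)
    for directory in TMPFILES_DIRS:
        for conf in sorted(glob.glob(os.path.join(directory, "*.conf"))):
            with open(conf, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append(f"{conf}:{lineno}")
    return {path: where for path, where in seen.items() if len(where) > 1}

if __name__ == "__main__":
    for path, where in sorted(duplicate_paths().items()):
        print(path, "<-", ", ".join(where))
```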
Nov 6 00:24:19.448015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:24:19.448241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:24:19.451164 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:24:19.451666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:24:19.464573 augenrules[1431]: /sbin/augenrules: No change Nov 6 00:24:19.465276 systemd[1]: Finished ensure-sysext.service. Nov 6 00:24:19.476167 augenrules[1488]: No rules Nov 6 00:24:19.478132 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:24:19.478834 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:24:19.489724 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:24:19.491970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:24:19.492087 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:24:19.495943 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:24:19.498428 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:24:19.525486 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:24:19.605434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:24:19.611720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:24:19.640700 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:24:19.645662 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:24:19.664381 systemd-resolved[1391]: Positive Trust Anchors: Nov 6 00:24:19.664399 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:24:19.664429 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:24:19.672972 systemd-resolved[1391]: Defaulting to hostname 'linux'. Nov 6 00:24:19.676368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:24:19.679079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:24:19.696655 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 00:24:19.715821 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:24:19.716554 systemd-networkd[1495]: lo: Link UP Nov 6 00:24:19.716569 systemd-networkd[1495]: lo: Gained carrier Nov 6 00:24:19.719206 systemd-networkd[1495]: Enumeration completed Nov 6 00:24:19.719351 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 6 00:24:19.722022 systemd[1]: Reached target network.target - Network. Nov 6 00:24:19.723235 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:19.723244 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:24:19.730950 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 6 00:24:19.731305 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 00:24:19.731507 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:24:19.725930 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:24:19.732411 systemd-networkd[1495]: eth0: Link UP Nov 6 00:24:19.732604 systemd-networkd[1495]: eth0: Gained carrier Nov 6 00:24:19.733644 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:19.804764 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:24:19.807256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:24:19.809779 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:24:19.812807 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:24:19.815234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:24:19.817776 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:24:19.818700 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:24:19.819498 systemd-timesyncd[1496]: Network configuration changed, trying to establish connection. Nov 6 00:24:19.820240 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:24:19.820338 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:24:19.820375 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:24:19.821007 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:24:20.895770 systemd-timesyncd[1496]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 00:24:20.895844 systemd-timesyncd[1496]: Initial clock synchronization to Thu 2025-11-06 00:24:20.895635 UTC. Nov 6 00:24:20.897361 systemd-resolved[1391]: Clock change detected. Flushing caches. Nov 6 00:24:20.897845 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:24:20.898068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:24:20.898634 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:24:20.902178 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:24:20.916477 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:24:20.921034 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:24:20.924944 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:24:20.927374 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
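eth0 has picked up the DHCPv4 lease 10.0.0.90/16 with gateway 10.0.0.1. A sketch that reads the current addresses back through iproute2's JSON output ("ip -j addr show"), handy for confirming the lease from a shell on the booted machine:

```python
import json
import subprocess

def interface_addresses(interface: str = "eth0"):
    """Return the addresses configured on an interface via 'ip -j addr show';
    on the machine above this would include the DHCPv4 lease 10.0.0.90/16."""
    out = subprocess.run(
        ["ip", "-j", "addr", "show", "dev", interface],
        check=True, capture_output=True, text=True,
    ).stdout
    addrs = []
    for link in json.loads(out):
        for addr in link.get("addr_info", []):
            addrs.append(f"{addr['local']}/{addr['prefixlen']}")
    return addrs

if __name__ == "__main__":
    print(interface_addresses())
```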
Nov 6 00:24:20.934530 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:24:20.937364 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:24:20.941464 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:24:20.969051 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:24:21.022266 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:24:21.032798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:24:21.032865 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:24:21.034849 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:24:21.039395 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:24:21.083594 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:24:21.087343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:24:21.093035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:24:21.094263 kernel: kvm_amd: TSC scaling supported Nov 6 00:24:21.094305 kernel: kvm_amd: Nested Virtualization enabled Nov 6 00:24:21.094324 kernel: kvm_amd: Nested Paging enabled Nov 6 00:24:21.094342 kernel: kvm_amd: LBR virtualization supported Nov 6 00:24:21.094359 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 6 00:24:21.094376 kernel: kvm_amd: Virtual GIF supported Nov 6 00:24:21.098565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:24:21.100112 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:24:21.101853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:24:21.106120 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:24:21.111055 jq[1535]: false Nov 6 00:24:21.112452 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:24:21.118469 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:24:21.122149 extend-filesystems[1536]: Found /dev/vda6 Nov 6 00:24:21.132389 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing passwd entry cache Nov 6 00:24:21.124262 oslogin_cache_refresh[1537]: Refreshing passwd entry cache Nov 6 00:24:21.135200 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting users, quitting Nov 6 00:24:21.135175 oslogin_cache_refresh[1537]: Failure getting users, quitting Nov 6 00:24:21.135328 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:24:21.135328 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing group entry cache Nov 6 00:24:21.135234 oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:24:21.135293 oslogin_cache_refresh[1537]: Refreshing group entry cache Nov 6 00:24:21.137343 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:24:21.139604 extend-filesystems[1536]: Found /dev/vda9 Nov 6 00:24:21.142043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 00:24:21.144669 extend-filesystems[1536]: Checking size of /dev/vda9 Nov 6 00:24:21.150960 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting groups, quitting Nov 6 00:24:21.150960 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:24:21.150428 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:24:21.147522 oslogin_cache_refresh[1537]: Failure getting groups, quitting Nov 6 00:24:21.151121 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:24:21.147541 oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:24:21.152914 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:24:21.160692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:24:21.164907 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:24:21.170836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:24:21.173725 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:24:21.176793 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:24:21.180047 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:24:21.180724 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:24:21.181684 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:24:21.182564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:24:21.196340 update_engine[1556]: I20251106 00:24:21.183923 1556 main.cc:92] Flatcar Update Engine starting Nov 6 00:24:21.196669 jq[1557]: true Nov 6 00:24:21.199687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:24:21.200039 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:24:21.204834 extend-filesystems[1536]: Resized partition /dev/vda9 Nov 6 00:24:21.210831 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:24:21.221243 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 6 00:24:21.223888 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:24:21.234902 tar[1563]: linux-amd64/LICENSE Nov 6 00:24:21.234902 tar[1563]: linux-amd64/helm Nov 6 00:24:21.262802 jq[1566]: true Nov 6 00:24:21.268240 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:24:21.402926 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 6 00:24:21.428184 dbus-daemon[1533]: [system] SELinux support is enabled Nov 6 00:24:21.428502 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:24:21.432207 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:24:21.432545 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 6 00:24:21.432584 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:24:21.432721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:24:21.432764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:24:21.434291 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:24:21.434808 systemd-logind[1551]: New seat seat0. Nov 6 00:24:21.437074 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:24:21.440973 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 00:24:21.440973 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 00:24:21.440973 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 6 00:24:21.442041 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Nov 6 00:24:21.444684 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:24:21.446009 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:24:21.448712 bash[1599]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:24:21.452648 update_engine[1556]: I20251106 00:24:21.451445 1556 update_check_scheduler.cc:74] Next update check in 3m57s Nov 6 00:24:21.466931 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:24:21.472210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:21.476999 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:24:21.479750 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:24:21.484489 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 6 00:24:21.523532 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:24:21.569276 containerd[1572]: time="2025-11-06T00:24:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:24:21.569677 containerd[1572]: time="2025-11-06T00:24:21.569559264Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579315701Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.868µs" Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579355385Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579380733Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579574085Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579590496Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579616244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579680425Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579694631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579955230Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579967573Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579980628Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580450 containerd[1572]: time="2025-11-06T00:24:21.579990937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580801 containerd[1572]: time="2025-11-06T00:24:21.580086596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:24:21.580979 containerd[1572]: time="2025-11-06T00:24:21.580940848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:24:21.581014 containerd[1572]: time="2025-11-06T00:24:21.580984751Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:24:21.581014 containerd[1572]: time="2025-11-06T00:24:21.580995531Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:24:21.581063 containerd[1572]: time="2025-11-06T00:24:21.581030416Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:24:21.581446 containerd[1572]: time="2025-11-06T00:24:21.581416460Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:24:21.581533 containerd[1572]: time="2025-11-06T00:24:21.581512741Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:24:21.588907 containerd[1572]: time="2025-11-06T00:24:21.588845202Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.588930742Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.588947774Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.588961009Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.588982690Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.588998369Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.589011364Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.589025139Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.589038034Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:24:21.589046 containerd[1572]: time="2025-11-06T00:24:21.589049435Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:24:21.589361 containerd[1572]: time="2025-11-06T00:24:21.589060265Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:24:21.589361 containerd[1572]: time="2025-11-06T00:24:21.589074712Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:24:21.589361 containerd[1572]: time="2025-11-06T00:24:21.589313260Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:24:21.589361 containerd[1572]: time="2025-11-06T00:24:21.589340912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:24:21.589361 containerd[1572]: time="2025-11-06T00:24:21.589359386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 
containerd[1572]: time="2025-11-06T00:24:21.589373563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589387940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589400964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589413758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589425290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589439677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589460275Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:24:21.589478 containerd[1572]: time="2025-11-06T00:24:21.589475334Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:24:21.589781 containerd[1572]: time="2025-11-06T00:24:21.589552148Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:24:21.589781 containerd[1572]: time="2025-11-06T00:24:21.589565683Z" level=info msg="Start snapshots syncer" Nov 6 00:24:21.589781 containerd[1572]: time="2025-11-06T00:24:21.589606550Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:24:21.589906 containerd[1572]: time="2025-11-06T00:24:21.589848564Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:24:21.590072 containerd[1572]: time="2025-11-06T00:24:21.589931720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:24:21.590072 containerd[1572]: time="2025-11-06T00:24:21.590006630Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:24:21.590127 containerd[1572]: time="2025-11-06T00:24:21.590114562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:24:21.590153 containerd[1572]: time="2025-11-06T00:24:21.590135161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:24:21.590189 containerd[1572]: time="2025-11-06T00:24:21.590151201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590186407Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590247402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590265666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590279171Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590305661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:24:21.590215 containerd[1572]: 
time="2025-11-06T00:24:21.590319326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590333203Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590369170Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590388406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590399527Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590409946Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590418172Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:24:21.590215 containerd[1572]: time="2025-11-06T00:24:21.590429363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590440814Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590458107Z" level=info msg="runtime interface created" Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590465390Z" level=info msg="created NRI interface" Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590486871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590499334Z" level=info msg="Connect containerd service" Nov 6 00:24:21.590789 containerd[1572]: time="2025-11-06T00:24:21.590521415Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:24:21.591395 containerd[1572]: time="2025-11-06T00:24:21.591368714Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:24:21.596763 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:24:21.625897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:24:21.630940 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:24:21.655982 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:24:21.656420 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:24:21.661851 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:24:21.688847 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:24:21.695196 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:24:21.699232 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Nov 6 00:24:21.701564 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:24:21.704355 containerd[1572]: time="2025-11-06T00:24:21.704295362Z" level=info msg="Start subscribing containerd event" Nov 6 00:24:21.704618 containerd[1572]: time="2025-11-06T00:24:21.704539450Z" level=info msg="Start recovering state" Nov 6 00:24:21.704868 containerd[1572]: time="2025-11-06T00:24:21.704845574Z" level=info msg="Start event monitor" Nov 6 00:24:21.706135 containerd[1572]: time="2025-11-06T00:24:21.706114544Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:24:21.706265 containerd[1572]: time="2025-11-06T00:24:21.706246341Z" level=info msg="Start streaming server" Nov 6 00:24:21.706350 containerd[1572]: time="2025-11-06T00:24:21.706333685Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:24:21.706416 containerd[1572]: time="2025-11-06T00:24:21.706402173Z" level=info msg="runtime interface starting up..." Nov 6 00:24:21.706491 containerd[1572]: time="2025-11-06T00:24:21.706475621Z" level=info msg="starting plugins..." Nov 6 00:24:21.706567 containerd[1572]: time="2025-11-06T00:24:21.706552114Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:24:21.706953 containerd[1572]: time="2025-11-06T00:24:21.706564758Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:24:21.706953 containerd[1572]: time="2025-11-06T00:24:21.706828984Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:24:21.706953 containerd[1572]: time="2025-11-06T00:24:21.706946063Z" level=info msg="containerd successfully booted in 0.138585s" Nov 6 00:24:21.707027 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:24:21.894317 tar[1563]: linux-amd64/README.md Nov 6 00:24:21.922716 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:24:22.101628 systemd-networkd[1495]: eth0: Gained IPv6LL Nov 6 00:24:22.105638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:24:22.108669 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:24:22.113051 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 00:24:22.117427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:22.121356 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:24:22.263558 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:24:22.266672 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 00:24:22.267025 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 00:24:22.270852 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:24:23.510694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:24:23.514399 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786). Nov 6 00:24:23.610897 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:23.614337 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:23.632176 systemd-logind[1551]: New session 1 of user core. Nov 6 00:24:23.632927 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 6 00:24:23.637092 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:24:23.673317 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:24:23.678970 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:24:23.700642 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:24:23.703846 systemd-logind[1551]: New session c1 of user core. Nov 6 00:24:23.871853 systemd[1677]: Queued start job for default target default.target. Nov 6 00:24:23.908184 systemd[1677]: Created slice app.slice - User Application Slice. Nov 6 00:24:23.908246 systemd[1677]: Reached target paths.target - Paths. Nov 6 00:24:23.908314 systemd[1677]: Reached target timers.target - Timers. Nov 6 00:24:23.910330 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:24:23.926803 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:24:23.926992 systemd[1677]: Reached target sockets.target - Sockets. Nov 6 00:24:23.927054 systemd[1677]: Reached target basic.target - Basic System. Nov 6 00:24:23.927133 systemd[1677]: Reached target default.target - Main User Target. Nov 6 00:24:23.927191 systemd[1677]: Startup finished in 214ms. Nov 6 00:24:23.927702 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:24:23.931581 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:24:24.001741 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:33800.service - OpenSSH per-connection server daemon (10.0.0.1:33800). Nov 6 00:24:24.134294 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 33800 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:24.135779 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.140571 systemd-logind[1551]: New session 2 of user core. Nov 6 00:24:24.152084 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:24:24.164435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:24.167592 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:24:24.170992 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:24:24.172301 systemd[1]: Startup finished in 4.719s (kernel) + 8.973s (initrd) + 6.505s (userspace) = 20.199s. Nov 6 00:24:24.239590 sshd[1695]: Connection closed by 10.0.0.1 port 33800 Nov 6 00:24:24.242161 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:24.315003 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:33800.service: Deactivated successfully. Nov 6 00:24:24.317442 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:24:24.318628 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:24:24.321963 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:33810.service - OpenSSH per-connection server daemon (10.0.0.1:33810). Nov 6 00:24:24.322889 systemd-logind[1551]: Removed session 2. Nov 6 00:24:24.389204 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 33810 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:24.391414 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.399071 systemd-logind[1551]: New session 3 of user core. 
Nov 6 00:24:24.410526 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:24:24.467551 sshd[1717]: Connection closed by 10.0.0.1 port 33810 Nov 6 00:24:24.468065 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:24.479659 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:33810.service: Deactivated successfully. Nov 6 00:24:24.482968 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:24:24.484847 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:24:24.487548 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:33824.service - OpenSSH per-connection server daemon (10.0.0.1:33824). Nov 6 00:24:24.489257 systemd-logind[1551]: Removed session 3. Nov 6 00:24:24.653895 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 33824 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:24.656757 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.663717 systemd-logind[1551]: New session 4 of user core. Nov 6 00:24:24.673428 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:24:24.772257 sshd[1726]: Connection closed by 10.0.0.1 port 33824 Nov 6 00:24:24.773656 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:24.781752 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:33824.service: Deactivated successfully. Nov 6 00:24:24.783876 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:24:24.784820 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:24:24.788686 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:33830.service - OpenSSH per-connection server daemon (10.0.0.1:33830). Nov 6 00:24:24.789470 systemd-logind[1551]: Removed session 4. Nov 6 00:24:24.920869 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 33830 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:24.923599 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.929967 systemd-logind[1551]: New session 5 of user core. Nov 6 00:24:24.944529 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:24:25.078421 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:24:25.078751 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:24:25.095744 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:25.098891 sshd[1736]: Connection closed by 10.0.0.1 port 33830 Nov 6 00:24:25.100431 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:25.111725 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:33830.service: Deactivated successfully. Nov 6 00:24:25.114737 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:24:25.116008 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:24:25.120963 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840). Nov 6 00:24:25.121900 systemd-logind[1551]: Removed session 5. Nov 6 00:24:25.209074 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:25.211967 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:25.220636 systemd-logind[1551]: New session 6 of user core. 
Nov 6 00:24:25.232903 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:24:25.233641 kubelet[1697]: E1106 00:24:25.233591 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:24:25.240439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:24:25.240662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:24:25.241157 systemd[1]: kubelet.service: Consumed 2.769s CPU time, 258.4M memory peak. Nov 6 00:24:25.299282 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:24:25.299790 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:24:25.385634 sudo[1750]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:25.394668 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:24:25.395086 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:24:25.443460 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:24:25.500143 augenrules[1772]: No rules Nov 6 00:24:25.502086 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:24:25.502409 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:24:25.503934 sudo[1749]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:25.506043 sshd[1747]: Connection closed by 10.0.0.1 port 33840 Nov 6 00:24:25.506560 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:25.517975 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:33840.service: Deactivated successfully. Nov 6 00:24:25.520888 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:24:25.522031 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:24:25.525719 systemd-logind[1551]: Removed session 6. Nov 6 00:24:25.527762 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:33848.service - OpenSSH per-connection server daemon (10.0.0.1:33848). Nov 6 00:24:25.598394 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 33848 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:24:25.600526 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:25.607006 systemd-logind[1551]: New session 7 of user core. Nov 6 00:24:25.616568 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:24:25.678108 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:24:25.678531 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:24:26.798639 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 6 00:24:26.819868 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:24:27.700936 dockerd[1806]: time="2025-11-06T00:24:27.700830636Z" level=info msg="Starting up" Nov 6 00:24:27.702934 dockerd[1806]: time="2025-11-06T00:24:27.702872425Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:24:27.720972 dockerd[1806]: time="2025-11-06T00:24:27.720908690Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:24:28.690970 dockerd[1806]: time="2025-11-06T00:24:28.690878339Z" level=info msg="Loading containers: start." Nov 6 00:24:28.795257 kernel: Initializing XFRM netlink socket Nov 6 00:24:29.125370 systemd-networkd[1495]: docker0: Link UP Nov 6 00:24:29.240525 dockerd[1806]: time="2025-11-06T00:24:29.240091370Z" level=info msg="Loading containers: done." Nov 6 00:24:29.263325 dockerd[1806]: time="2025-11-06T00:24:29.263184357Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:24:29.263525 dockerd[1806]: time="2025-11-06T00:24:29.263364034Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:24:29.263555 dockerd[1806]: time="2025-11-06T00:24:29.263526369Z" level=info msg="Initializing buildkit" Nov 6 00:24:29.322161 dockerd[1806]: time="2025-11-06T00:24:29.322103291Z" level=info msg="Completed buildkit initialization" Nov 6 00:24:29.329576 dockerd[1806]: time="2025-11-06T00:24:29.329485395Z" level=info msg="Daemon has completed initialization" Nov 6 00:24:29.329722 dockerd[1806]: time="2025-11-06T00:24:29.329623364Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:24:29.329824 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:24:30.168906 containerd[1572]: time="2025-11-06T00:24:30.168855995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:24:30.932488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150163075.mount: Deactivated successfully. 
Nov 6 00:24:32.196736 containerd[1572]: time="2025-11-06T00:24:32.196669098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:32.197482 containerd[1572]: time="2025-11-06T00:24:32.197419536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 6 00:24:32.198614 containerd[1572]: time="2025-11-06T00:24:32.198588708Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:32.201952 containerd[1572]: time="2025-11-06T00:24:32.201909476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:32.203232 containerd[1572]: time="2025-11-06T00:24:32.203179838Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.034285141s" Nov 6 00:24:32.203295 containerd[1572]: time="2025-11-06T00:24:32.203235302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:24:32.204146 containerd[1572]: time="2025-11-06T00:24:32.204107398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:24:33.300803 containerd[1572]: time="2025-11-06T00:24:33.300734315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:33.301713 containerd[1572]: time="2025-11-06T00:24:33.301652767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 6 00:24:33.302947 containerd[1572]: time="2025-11-06T00:24:33.302902511Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:33.305666 containerd[1572]: time="2025-11-06T00:24:33.305614166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:33.306593 containerd[1572]: time="2025-11-06T00:24:33.306540123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.102405895s" Nov 6 00:24:33.306650 containerd[1572]: time="2025-11-06T00:24:33.306617518Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:24:33.307203 containerd[1572]: 
time="2025-11-06T00:24:33.307158522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:24:34.532277 containerd[1572]: time="2025-11-06T00:24:34.531568769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:34.534981 containerd[1572]: time="2025-11-06T00:24:34.534931114Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 6 00:24:34.537503 containerd[1572]: time="2025-11-06T00:24:34.537455659Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:34.540828 containerd[1572]: time="2025-11-06T00:24:34.540772359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:34.541793 containerd[1572]: time="2025-11-06T00:24:34.541739823Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.234549241s" Nov 6 00:24:34.541793 containerd[1572]: time="2025-11-06T00:24:34.541780199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:24:34.542359 containerd[1572]: time="2025-11-06T00:24:34.542327004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:24:35.491430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:24:35.493797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:36.100685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:36.123792 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:24:36.348083 kubelet[2098]: E1106 00:24:36.348000 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:24:36.358017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:24:36.358341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:24:36.359112 systemd[1]: kubelet.service: Consumed 673ms CPU time, 109M memory peak. Nov 6 00:24:36.901104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925816492.mount: Deactivated successfully. 
Nov 6 00:24:37.505911 containerd[1572]: time="2025-11-06T00:24:37.505812908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:37.507181 containerd[1572]: time="2025-11-06T00:24:37.507096746Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 6 00:24:37.508640 containerd[1572]: time="2025-11-06T00:24:37.508549450Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:37.510948 containerd[1572]: time="2025-11-06T00:24:37.510894528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:37.511769 containerd[1572]: time="2025-11-06T00:24:37.511699347Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.969327469s" Nov 6 00:24:37.511769 containerd[1572]: time="2025-11-06T00:24:37.511754481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 00:24:37.513189 containerd[1572]: time="2025-11-06T00:24:37.513163393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 00:24:38.232256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018345800.mount: Deactivated successfully. 
Nov 6 00:24:40.361072 containerd[1572]: time="2025-11-06T00:24:40.360942084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:40.364446 containerd[1572]: time="2025-11-06T00:24:40.364393357Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 6 00:24:40.368354 containerd[1572]: time="2025-11-06T00:24:40.368259177Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:40.375005 containerd[1572]: time="2025-11-06T00:24:40.374906332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:40.377325 containerd[1572]: time="2025-11-06T00:24:40.377238516Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.864017465s" Nov 6 00:24:40.377325 containerd[1572]: time="2025-11-06T00:24:40.377296735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 00:24:40.378480 containerd[1572]: time="2025-11-06T00:24:40.378326336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 00:24:41.402644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986439341.mount: Deactivated successfully. 
Nov 6 00:24:41.431968 containerd[1572]: time="2025-11-06T00:24:41.430726472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:41.439699 containerd[1572]: time="2025-11-06T00:24:41.432827672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 6 00:24:41.439699 containerd[1572]: time="2025-11-06T00:24:41.436539964Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:41.444418 containerd[1572]: time="2025-11-06T00:24:41.444115651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:41.448742 containerd[1572]: time="2025-11-06T00:24:41.446314014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.067937524s" Nov 6 00:24:41.448742 containerd[1572]: time="2025-11-06T00:24:41.447918363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 00:24:41.448742 containerd[1572]: time="2025-11-06T00:24:41.448746205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 00:24:46.579396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:24:46.581958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:47.252515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:47.313929 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:24:47.866454 kubelet[2219]: E1106 00:24:47.866345 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:24:47.872067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:24:47.872367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:24:47.873274 systemd[1]: kubelet.service: Consumed 728ms CPU time, 110.7M memory peak. 
Nov 6 00:24:48.212353 containerd[1572]: time="2025-11-06T00:24:48.211128536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:48.215989 containerd[1572]: time="2025-11-06T00:24:48.213975375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 6 00:24:48.215989 containerd[1572]: time="2025-11-06T00:24:48.215874056Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:48.223374 containerd[1572]: time="2025-11-06T00:24:48.223128200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:48.229598 containerd[1572]: time="2025-11-06T00:24:48.229512953Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.780727484s" Nov 6 00:24:48.229598 containerd[1572]: time="2025-11-06T00:24:48.229567716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 00:24:54.294005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:54.294276 systemd[1]: kubelet.service: Consumed 728ms CPU time, 110.7M memory peak. Nov 6 00:24:54.300695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:54.434182 systemd[1]: Reload requested from client PID 2263 ('systemctl') (unit session-7.scope)... Nov 6 00:24:54.434198 systemd[1]: Reloading... Nov 6 00:24:54.537287 zram_generator::config[2306]: No configuration found. Nov 6 00:24:55.154746 systemd[1]: Reloading finished in 720 ms. Nov 6 00:24:55.262079 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:24:55.262296 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:24:55.262756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:55.262823 systemd[1]: kubelet.service: Consumed 282ms CPU time, 98.3M memory peak. Nov 6 00:24:55.270945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:55.576173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:55.589300 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:24:56.051442 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:24:56.051442 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:24:56.051442 kubelet[2354]: I1106 00:24:56.050807 2354 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:24:56.761762 kubelet[2354]: I1106 00:24:56.761686 2354 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:24:56.761762 kubelet[2354]: I1106 00:24:56.761732 2354 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:24:56.761762 kubelet[2354]: I1106 00:24:56.761783 2354 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:24:56.762028 kubelet[2354]: I1106 00:24:56.761799 2354 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:24:56.762329 kubelet[2354]: I1106 00:24:56.762294 2354 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:24:57.227413 kubelet[2354]: E1106 00:24:57.227327 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:24:57.237392 kubelet[2354]: I1106 00:24:57.237304 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:24:57.249652 kubelet[2354]: I1106 00:24:57.249577 2354 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:24:57.264331 kubelet[2354]: I1106 00:24:57.263817 2354 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 00:24:57.264331 kubelet[2354]: I1106 00:24:57.264169 2354 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:24:57.266598 kubelet[2354]: I1106 00:24:57.264209 2354 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:24:57.266598 kubelet[2354]: I1106 00:24:57.266006 2354 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:24:57.266598 kubelet[2354]: I1106 00:24:57.266024 2354 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:24:57.266598 kubelet[2354]: I1106 00:24:57.266244 2354 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:24:57.275254 kubelet[2354]: I1106 00:24:57.275142 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:57.279204 kubelet[2354]: I1106 00:24:57.277357 2354 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:24:57.279204 kubelet[2354]: I1106 00:24:57.277791 2354 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:24:57.281347 kubelet[2354]: I1106 00:24:57.280394 2354 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:24:57.281347 kubelet[2354]: E1106 00:24:57.277518 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:24:57.281347 kubelet[2354]: I1106 00:24:57.280450 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:24:57.288459 kubelet[2354]: I1106 00:24:57.285391 2354 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:24:57.290021 kubelet[2354]: E1106 00:24:57.289980 2354 reflector.go:205] "Failed to watch" 
err="failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:24:57.290971 kubelet[2354]: I1106 00:24:57.290946 2354 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:24:57.291079 kubelet[2354]: I1106 00:24:57.291065 2354 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:24:57.291270 kubelet[2354]: W1106 00:24:57.291253 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:24:57.315700 kubelet[2354]: I1106 00:24:57.314386 2354 server.go:1262] "Started kubelet" Nov 6 00:24:57.315700 kubelet[2354]: I1106 00:24:57.314818 2354 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:24:57.327436 kubelet[2354]: I1106 00:24:57.327186 2354 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:24:57.327724 kubelet[2354]: I1106 00:24:57.327705 2354 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:24:57.328479 kubelet[2354]: I1106 00:24:57.328463 2354 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:24:57.340072 kubelet[2354]: I1106 00:24:57.336192 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:24:57.341746 kubelet[2354]: I1106 00:24:57.341667 2354 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:24:57.341810 kubelet[2354]: E1106 00:24:57.341762 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:24:57.343537 kubelet[2354]: I1106 00:24:57.343496 2354 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:24:57.346449 kubelet[2354]: I1106 00:24:57.345702 2354 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:24:57.351552 kubelet[2354]: I1106 00:24:57.351478 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:24:57.356910 kubelet[2354]: E1106 00:24:57.345174 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875432ec3950f01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:24:57.314299649 +0000 UTC m=+1.697578825,LastTimestamp:2025-11-06 00:24:57.314299649 +0000 UTC m=+1.697578825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:24:57.360084 kubelet[2354]: I1106 00:24:57.359657 2354 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:24:57.363729 
kubelet[2354]: E1106 00:24:57.363677 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" Nov 6 00:24:57.367862 kubelet[2354]: E1106 00:24:57.366024 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:24:57.371051 kubelet[2354]: I1106 00:24:57.371004 2354 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:24:57.378336 kubelet[2354]: E1106 00:24:57.378245 2354 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:24:57.382980 kubelet[2354]: I1106 00:24:57.379379 2354 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:24:57.382980 kubelet[2354]: I1106 00:24:57.379400 2354 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:24:57.426306 kubelet[2354]: I1106 00:24:57.425473 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:24:57.426306 kubelet[2354]: I1106 00:24:57.425503 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:24:57.426306 kubelet[2354]: I1106 00:24:57.425586 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:57.444496 kubelet[2354]: E1106 00:24:57.444442 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:24:57.449196 kubelet[2354]: I1106 00:24:57.449129 2354 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:24:57.456454 kubelet[2354]: I1106 00:24:57.456411 2354 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:24:57.456809 kubelet[2354]: I1106 00:24:57.456793 2354 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:24:57.457046 kubelet[2354]: I1106 00:24:57.457030 2354 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:24:57.457306 kubelet[2354]: E1106 00:24:57.457261 2354 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:24:57.458980 kubelet[2354]: E1106 00:24:57.458464 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:24:57.544883 kubelet[2354]: E1106 00:24:57.544695 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:24:57.558971 kubelet[2354]: E1106 00:24:57.558058 2354 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:24:57.565368 kubelet[2354]: E1106 00:24:57.565166 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" Nov 6 00:24:57.645186 kubelet[2354]: E1106 00:24:57.645093 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:24:57.655080 kubelet[2354]: I1106 00:24:57.654994 2354 policy_none.go:49] "None policy: Start" Nov 6 00:24:57.655080 kubelet[2354]: I1106 00:24:57.655055 2354 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:24:57.655080 kubelet[2354]: I1106 00:24:57.655091 2354 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:24:57.666352 kubelet[2354]: I1106 00:24:57.665027 2354 policy_none.go:47] "Start" Nov 6 00:24:57.687446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:24:57.715081 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:24:57.736346 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:24:57.745612 kubelet[2354]: E1106 00:24:57.745486 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:24:57.748812 kubelet[2354]: E1106 00:24:57.748739 2354 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:24:57.749452 kubelet[2354]: I1106 00:24:57.749076 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:24:57.749452 kubelet[2354]: I1106 00:24:57.749094 2354 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:24:57.751335 kubelet[2354]: I1106 00:24:57.750980 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:24:57.753328 kubelet[2354]: E1106 00:24:57.752787 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:24:57.753328 kubelet[2354]: E1106 00:24:57.752973 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:24:57.794658 systemd[1]: Created slice kubepods-burstable-pod5d68aa504f2adc98844ac75a4df823f2.slice - libcontainer container kubepods-burstable-pod5d68aa504f2adc98844ac75a4df823f2.slice. Nov 6 00:24:57.813724 kubelet[2354]: E1106 00:24:57.813506 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:24:57.822320 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 6 00:24:57.829280 kubelet[2354]: E1106 00:24:57.828807 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:24:57.847179 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 6 00:24:57.853865 kubelet[2354]: E1106 00:24:57.853390 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:24:57.854413 kubelet[2354]: I1106 00:24:57.854370 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:24:57.858083 kubelet[2354]: E1106 00:24:57.858027 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Nov 6 00:24:57.863386 kubelet[2354]: I1106 00:24:57.863337 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:24:57.863467 kubelet[2354]: I1106 00:24:57.863386 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:24:57.863467 kubelet[2354]: I1106 00:24:57.863412 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:24:57.863467 kubelet[2354]: I1106 00:24:57.863431 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:24:57.863467 kubelet[2354]: I1106 00:24:57.863454 2354 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:24:57.863631 kubelet[2354]: I1106 00:24:57.863471 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:24:57.863631 kubelet[2354]: I1106 00:24:57.863489 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:24:57.863631 kubelet[2354]: I1106 00:24:57.863507 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:24:57.863631 kubelet[2354]: I1106 00:24:57.863524 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:24:57.965953 kubelet[2354]: E1106 00:24:57.965872 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" Nov 6 00:24:58.069251 kubelet[2354]: I1106 00:24:58.067928 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:24:58.072411 kubelet[2354]: E1106 00:24:58.069871 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Nov 6 00:24:58.128067 kubelet[2354]: E1106 00:24:58.126127 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:58.129125 containerd[1572]: time="2025-11-06T00:24:58.128509190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d68aa504f2adc98844ac75a4df823f2,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:58.178688 kubelet[2354]: E1106 00:24:58.178617 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:58.179410 containerd[1572]: time="2025-11-06T00:24:58.179342900Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:58.218904 kubelet[2354]: E1106 00:24:58.218815 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:58.219656 containerd[1572]: time="2025-11-06T00:24:58.219615448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:58.433648 kubelet[2354]: E1106 00:24:58.433564 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:24:58.474617 kubelet[2354]: I1106 00:24:58.474392 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:24:58.475005 kubelet[2354]: E1106 00:24:58.474968 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Nov 6 00:24:58.578879 kubelet[2354]: E1106 00:24:58.577965 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875432ec3950f01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:24:57.314299649 +0000 UTC m=+1.697578825,LastTimestamp:2025-11-06 00:24:57.314299649 +0000 UTC m=+1.697578825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:24:58.589880 kubelet[2354]: E1106 00:24:58.588195 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:24:58.768827 kubelet[2354]: E1106 00:24:58.768583 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" Nov 6 00:24:58.841108 kubelet[2354]: E1106 00:24:58.840125 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:24:58.972397 kubelet[2354]: E1106 00:24:58.967957 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:24:58.987347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267264711.mount: Deactivated successfully. Nov 6 00:24:59.001045 containerd[1572]: time="2025-11-06T00:24:59.000958704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:59.010441 containerd[1572]: time="2025-11-06T00:24:59.009910635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:24:59.014945 containerd[1572]: time="2025-11-06T00:24:59.014831905Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:59.027962 containerd[1572]: time="2025-11-06T00:24:59.026883931Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:59.027962 containerd[1572]: time="2025-11-06T00:24:59.027156850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:24:59.029235 containerd[1572]: time="2025-11-06T00:24:59.029149894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:59.032489 containerd[1572]: time="2025-11-06T00:24:59.032417307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:59.034565 containerd[1572]: time="2025-11-06T00:24:59.033746805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 808.561358ms" Nov 6 00:24:59.037634 containerd[1572]: time="2025-11-06T00:24:59.037588565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:24:59.042408 containerd[1572]: time="2025-11-06T00:24:59.041844495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 824.341292ms" Nov 6 00:24:59.046508 containerd[1572]: time="2025-11-06T00:24:59.046162914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 796.508797ms" Nov 6 00:24:59.089055 containerd[1572]: time="2025-11-06T00:24:59.088954712Z" level=info msg="connecting to shim 333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980" address="unix:///run/containerd/s/19270d0f141557721346bd42e2ea79a759fd4fb1f6a6adf55ff029ed6522cb75" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:59.116202 containerd[1572]: time="2025-11-06T00:24:59.116100316Z" level=info msg="connecting to shim 231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e" address="unix:///run/containerd/s/e4cfc68ded7d64b2606557548e24fd54b673bdf1aed5801138e173dae8016f94" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:59.129398 containerd[1572]: time="2025-11-06T00:24:59.129287717Z" level=info msg="connecting to shim 146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80" address="unix:///run/containerd/s/19fb2d2e654ad36662351c7f81796655811c9a541bb2d4a2480dc5a0fa33a8d8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:59.284431 kubelet[2354]: I1106 00:24:59.277880 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:24:59.284431 kubelet[2354]: E1106 00:24:59.281913 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Nov 6 00:24:59.346643 systemd[1]: Started cri-containerd-333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980.scope - libcontainer container 333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980. Nov 6 00:24:59.362576 systemd[1]: Started cri-containerd-231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e.scope - libcontainer container 231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e. Nov 6 00:24:59.391925 kubelet[2354]: E1106 00:24:59.391862 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:24:59.418615 systemd[1]: Started cri-containerd-146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80.scope - libcontainer container 146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80. 
Nov 6 00:24:59.677725 containerd[1572]: time="2025-11-06T00:24:59.677251213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980\"" Nov 6 00:24:59.678941 kubelet[2354]: E1106 00:24:59.678776 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:59.694370 containerd[1572]: time="2025-11-06T00:24:59.693115913Z" level=info msg="CreateContainer within sandbox \"333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:24:59.737354 containerd[1572]: time="2025-11-06T00:24:59.736734628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d68aa504f2adc98844ac75a4df823f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80\"" Nov 6 00:24:59.737803 kubelet[2354]: E1106 00:24:59.737773 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:59.934041 containerd[1572]: time="2025-11-06T00:24:59.933826269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e\"" Nov 6 00:24:59.934766 kubelet[2354]: E1106 00:24:59.934734 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:59.940007 containerd[1572]: time="2025-11-06T00:24:59.939935064Z" level=info msg="CreateContainer within sandbox \"146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:24:59.977768 containerd[1572]: time="2025-11-06T00:24:59.975243603Z" level=info msg="Container 7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:59.979690 containerd[1572]: time="2025-11-06T00:24:59.979592441Z" level=info msg="CreateContainer within sandbox \"231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:25:00.023370 containerd[1572]: time="2025-11-06T00:25:00.023319216Z" level=info msg="Container e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:00.031631 containerd[1572]: time="2025-11-06T00:25:00.031553882Z" level=info msg="CreateContainer within sandbox \"333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac\"" Nov 6 00:25:00.032681 containerd[1572]: time="2025-11-06T00:25:00.032647366Z" level=info msg="StartContainer for \"7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac\"" Nov 6 00:25:00.034642 containerd[1572]: time="2025-11-06T00:25:00.034611449Z" level=info msg="connecting to shim 
7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac" address="unix:///run/containerd/s/19270d0f141557721346bd42e2ea79a759fd4fb1f6a6adf55ff029ed6522cb75" protocol=ttrpc version=3 Nov 6 00:25:00.038653 containerd[1572]: time="2025-11-06T00:25:00.036924608Z" level=info msg="Container a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:00.051006 containerd[1572]: time="2025-11-06T00:25:00.050943427Z" level=info msg="CreateContainer within sandbox \"146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399\"" Nov 6 00:25:00.055373 containerd[1572]: time="2025-11-06T00:25:00.052542395Z" level=info msg="StartContainer for \"e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399\"" Nov 6 00:25:00.055373 containerd[1572]: time="2025-11-06T00:25:00.054201236Z" level=info msg="connecting to shim e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399" address="unix:///run/containerd/s/19fb2d2e654ad36662351c7f81796655811c9a541bb2d4a2480dc5a0fa33a8d8" protocol=ttrpc version=3 Nov 6 00:25:00.059413 containerd[1572]: time="2025-11-06T00:25:00.056619315Z" level=info msg="CreateContainer within sandbox \"231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91\"" Nov 6 00:25:00.059413 containerd[1572]: time="2025-11-06T00:25:00.058988812Z" level=info msg="StartContainer for \"a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91\"" Nov 6 00:25:00.069881 containerd[1572]: time="2025-11-06T00:25:00.069702731Z" level=info msg="connecting to shim a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91" address="unix:///run/containerd/s/e4cfc68ded7d64b2606557548e24fd54b673bdf1aed5801138e173dae8016f94" protocol=ttrpc version=3 Nov 6 00:25:00.086938 systemd[1]: Started cri-containerd-7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac.scope - libcontainer container 7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac. Nov 6 00:25:00.130554 systemd[1]: Started cri-containerd-e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399.scope - libcontainer container e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399. Nov 6 00:25:00.145623 systemd[1]: Started cri-containerd-a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91.scope - libcontainer container a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91. 
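Each sandbox or container ID that containerd returns above reappears as a transient systemd scope unit named cri-containerd-<id>.scope, which is how the "Started cri-containerd-..." lines pair with the earlier RunPodSandbox and StartContainer entries. A trivial illustrative helper; the IDs are copied from the log, the function is not kubelet or containerd code:

// scopename.go - map a containerd sandbox/container ID to the transient
// systemd scope unit seen in the "Started cri-containerd-....scope" lines.
package main

import "fmt"

func scopeUnit(id string) string {
	return "cri-containerd-" + id + ".scope"
}

func main() {
	// Sandbox IDs from the RunPodSandbox returns above.
	for _, id := range []string{
		"333b70861ee033f1d2559ee5de28270e27172bcdf0f1bba67ef45748ab7f0980", // kube-controller-manager-localhost
		"146f330988cc4c176eab3e2de1b3da848fa74b4904706a9585d33b1eb535eb80", // kube-apiserver-localhost
		"231b2ca97107b1357db09af195d691f45433d00c7dc1dfc460cf859acd0bc02e", // kube-scheduler-localhost
	} {
		fmt.Println(scopeUnit(id))
	}
}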
Nov 6 00:25:00.217085 kubelet[2354]: E1106 00:25:00.216929 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:25:00.275315 containerd[1572]: time="2025-11-06T00:25:00.275197938Z" level=info msg="StartContainer for \"7b1006a0559974b1313cb4abcb1df5fab30023209b834a8a935b8481fd79a5ac\" returns successfully" Nov 6 00:25:00.301554 containerd[1572]: time="2025-11-06T00:25:00.301479220Z" level=info msg="StartContainer for \"e642044ec0afb06b1372e3b61da9fc1c75e512629a35027b12f77d5881555399\" returns successfully" Nov 6 00:25:00.307825 containerd[1572]: time="2025-11-06T00:25:00.307615445Z" level=info msg="StartContainer for \"a68444621fa1fc092caf99b8418e85a3e72d5efeed3c72b6a2527a62b3010b91\" returns successfully" Nov 6 00:25:00.369920 kubelet[2354]: E1106 00:25:00.369815 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="3.2s" Nov 6 00:25:00.484461 kubelet[2354]: E1106 00:25:00.483755 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:00.484461 kubelet[2354]: E1106 00:25:00.484017 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:00.489906 kubelet[2354]: E1106 00:25:00.489868 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:00.490098 kubelet[2354]: E1106 00:25:00.490040 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:00.490494 kubelet[2354]: E1106 00:25:00.490468 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:00.490614 kubelet[2354]: E1106 00:25:00.490581 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:00.884287 kubelet[2354]: I1106 00:25:00.884214 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:25:00.964028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710811089.mount: Deactivated successfully. 
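The "Failed to ensure lease exists, will retry" lines back off by doubling: 200ms, 400ms, 800ms, 1.6s and now 3.2s. A sketch of that retry shape; the starting interval and the doubling are taken from the log, while the failing call and the attempt limit are stand-ins:

// leasebackoff.go - the doubling retry interval visible in the lease errors.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the call that keeps failing while the apiserver
// is unreachable; purely illustrative.
func ensureLease() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first interval printed in the log
	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
	}
}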
Nov 6 00:25:01.495248 kubelet[2354]: E1106 00:25:01.495164 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:01.495573 kubelet[2354]: E1106 00:25:01.495514 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:01.495658 kubelet[2354]: E1106 00:25:01.495623 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:01.495922 kubelet[2354]: E1106 00:25:01.495889 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:02.495159 kubelet[2354]: E1106 00:25:02.495110 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:02.495613 kubelet[2354]: E1106 00:25:02.495318 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:02.824291 kubelet[2354]: E1106 00:25:02.824095 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:25:02.824477 kubelet[2354]: E1106 00:25:02.824359 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:03.444332 kubelet[2354]: I1106 00:25:03.439504 2354 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:25:03.444332 kubelet[2354]: E1106 00:25:03.439567 2354 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 6 00:25:03.538975 kubelet[2354]: E1106 00:25:03.538907 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:25:03.644154 kubelet[2354]: I1106 00:25:03.644107 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:03.654209 kubelet[2354]: E1106 00:25:03.654151 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:03.654209 kubelet[2354]: I1106 00:25:03.654186 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:25:03.655904 kubelet[2354]: E1106 00:25:03.655847 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 00:25:03.655904 kubelet[2354]: I1106 00:25:03.655879 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:03.657734 kubelet[2354]: E1106 00:25:03.657690 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:04.283017 kubelet[2354]: I1106 00:25:04.282886 2354 apiserver.go:52] "Watching apiserver" Nov 6 00:25:04.344485 kubelet[2354]: I1106 00:25:04.344419 2354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:25:06.624891 kubelet[2354]: I1106 00:25:06.624823 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:06.690259 kubelet[2354]: E1106 00:25:06.690159 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:06.978545 update_engine[1556]: I20251106 00:25:06.978327 1556 update_attempter.cc:509] Updating boot flags... Nov 6 00:25:07.522934 kubelet[2354]: E1106 00:25:07.522865 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:10.190199 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-7.scope)... Nov 6 00:25:10.190236 systemd[1]: Reloading... Nov 6 00:25:10.301448 zram_generator::config[2704]: No configuration found. Nov 6 00:25:10.573688 systemd[1]: Reloading finished in 382 ms. Nov 6 00:25:10.608423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:10.625911 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:25:10.626385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:10.626460 systemd[1]: kubelet.service: Consumed 2.338s CPU time, 127.4M memory peak. Nov 6 00:25:10.628977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:10.869860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:10.883862 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:25:10.949863 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:25:10.949863 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:25:10.950284 kubelet[2749]: I1106 00:25:10.949918 2749 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:25:10.956670 kubelet[2749]: I1106 00:25:10.956628 2749 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:25:10.956670 kubelet[2749]: I1106 00:25:10.956649 2749 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:25:10.956670 kubelet[2749]: I1106 00:25:10.956674 2749 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:25:10.956820 kubelet[2749]: I1106 00:25:10.956680 2749 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
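Both kubelet starts report that the systemd watchdog is not enabled, so watchdog health checking is skipped. That decision hinges on the WATCHDOG_USEC environment variable systemd exports when WatchdogSec= is set on the unit, with keep-alives sent to $NOTIFY_SOCKET; a hedged sketch of the detection, following the sd_notify(3)/sd_watchdog_enabled(3) documentation rather than this log (abstract-namespace notify sockets are not handled):

// watchdog.go - detect whether systemd armed a watchdog for this service and,
// if so, emit the sd_notify keep-alive datagram. Illustrative only.
package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
	"time"
)

func main() {
	usec := os.Getenv("WATCHDOG_USEC") // set by systemd when WatchdogSec= is configured
	if usec == "" {
		fmt.Println("systemd watchdog is not enabled; health checking will not be started")
		return
	}
	n, err := strconv.ParseInt(usec, 10, 64)
	if err != nil || n <= 0 {
		fmt.Println("watchdog interval is invalid; health checking will not be started")
		return
	}
	interval := time.Duration(n) * time.Microsecond / 2 // ping at half the timeout

	conn, err := net.Dial("unixgram", os.Getenv("NOTIFY_SOCKET"))
	if err != nil {
		fmt.Printf("cannot reach notify socket: %v\n", err)
		return
	}
	defer conn.Close()

	for {
		conn.Write([]byte("WATCHDOG=1")) // keep-alive expected by systemd
		time.Sleep(interval)
	}
}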
Nov 6 00:25:10.956935 kubelet[2749]: I1106 00:25:10.956912 2749 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:25:10.958081 kubelet[2749]: I1106 00:25:10.958053 2749 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:25:10.961867 kubelet[2749]: I1106 00:25:10.961809 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:25:10.964727 kubelet[2749]: I1106 00:25:10.964707 2749 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:25:10.970887 kubelet[2749]: I1106 00:25:10.970859 2749 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:25:10.971263 kubelet[2749]: I1106 00:25:10.971197 2749 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:25:10.971464 kubelet[2749]: I1106 00:25:10.971261 2749 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:25:10.971553 kubelet[2749]: I1106 00:25:10.971469 2749 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:25:10.971553 kubelet[2749]: I1106 00:25:10.971479 2749 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:25:10.971553 kubelet[2749]: I1106 00:25:10.971503 2749 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:25:10.972140 kubelet[2749]: I1106 00:25:10.972115 2749 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:25:10.972339 kubelet[2749]: I1106 00:25:10.972319 2749 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:25:10.972339 kubelet[2749]: I1106 00:25:10.972334 2749 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:25:10.972396 kubelet[2749]: I1106 00:25:10.972356 2749 kubelet.go:387] "Adding apiserver pod source" Nov 6 
00:25:10.972396 kubelet[2749]: I1106 00:25:10.972372 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:25:10.973898 kubelet[2749]: I1106 00:25:10.973386 2749 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:25:10.974448 kubelet[2749]: I1106 00:25:10.974411 2749 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:25:10.974501 kubelet[2749]: I1106 00:25:10.974463 2749 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:25:10.978247 kubelet[2749]: I1106 00:25:10.977992 2749 server.go:1262] "Started kubelet" Nov 6 00:25:10.978588 kubelet[2749]: I1106 00:25:10.978554 2749 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:25:10.978626 kubelet[2749]: I1106 00:25:10.978603 2749 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:25:10.978872 kubelet[2749]: I1106 00:25:10.978850 2749 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:25:10.980248 kubelet[2749]: I1106 00:25:10.978930 2749 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:25:10.980248 kubelet[2749]: I1106 00:25:10.980118 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:25:10.981905 kubelet[2749]: I1106 00:25:10.981875 2749 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:25:10.988077 kubelet[2749]: I1106 00:25:10.988027 2749 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:25:10.988470 kubelet[2749]: I1106 00:25:10.988437 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:25:10.989617 kubelet[2749]: I1106 00:25:10.989582 2749 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:25:10.991517 kubelet[2749]: I1106 00:25:10.991460 2749 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:25:10.994533 kubelet[2749]: I1106 00:25:10.994501 2749 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:25:10.994773 kubelet[2749]: I1106 00:25:10.994744 2749 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:25:10.995032 kubelet[2749]: I1106 00:25:10.995005 2749 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:25:11.002340 kubelet[2749]: I1106 00:25:11.002307 2749 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 00:25:11.012155 kubelet[2749]: I1106 00:25:11.011838 2749 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 6 00:25:11.012155 kubelet[2749]: I1106 00:25:11.011867 2749 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:25:11.012155 kubelet[2749]: I1106 00:25:11.011895 2749 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:25:11.012155 kubelet[2749]: E1106 00:25:11.011950 2749 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:25:11.037582 kubelet[2749]: I1106 00:25:11.037551 2749 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.037993 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038019 2749 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038148 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038162 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038178 2749 policy_none.go:49] "None policy: Start" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038187 2749 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038196 2749 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038313 2749 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 00:25:11.039085 kubelet[2749]: I1106 00:25:11.038321 2749 policy_none.go:47] "Start" Nov 6 00:25:11.045250 kubelet[2749]: E1106 00:25:11.045179 2749 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:25:11.045494 kubelet[2749]: I1106 00:25:11.045479 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:25:11.045524 kubelet[2749]: I1106 00:25:11.045496 2749 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:25:11.045827 kubelet[2749]: I1106 00:25:11.045810 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:25:11.048401 kubelet[2749]: E1106 00:25:11.048380 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:25:11.113363 kubelet[2749]: I1106 00:25:11.113315 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.113363 kubelet[2749]: I1106 00:25:11.113340 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:25:11.113717 kubelet[2749]: I1106 00:25:11.113340 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:11.159703 kubelet[2749]: I1106 00:25:11.159558 2749 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:25:11.296361 kubelet[2749]: I1106 00:25:11.296290 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:11.296521 kubelet[2749]: I1106 00:25:11.296458 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:11.296599 kubelet[2749]: I1106 00:25:11.296566 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d68aa504f2adc98844ac75a4df823f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d68aa504f2adc98844ac75a4df823f2\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:11.296626 kubelet[2749]: I1106 00:25:11.296604 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.296649 kubelet[2749]: I1106 00:25:11.296626 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.296678 kubelet[2749]: I1106 00:25:11.296651 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:25:11.296735 kubelet[2749]: I1106 00:25:11.296692 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.296806 kubelet[2749]: I1106 00:25:11.296753 2749 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.296806 kubelet[2749]: I1106 00:25:11.296782 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.450291 kubelet[2749]: E1106 00:25:11.450097 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:11.491891 kubelet[2749]: E1106 00:25:11.491818 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:11.587783 kubelet[2749]: E1106 00:25:11.587716 2749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:25:11.588058 kubelet[2749]: E1106 00:25:11.587988 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:11.590882 kubelet[2749]: I1106 00:25:11.590826 2749 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 00:25:11.591018 kubelet[2749]: I1106 00:25:11.590943 2749 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:25:11.728490 sudo[2788]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 00:25:11.728850 sudo[2788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 00:25:11.974249 kubelet[2749]: I1106 00:25:11.974186 2749 apiserver.go:52] "Watching apiserver" Nov 6 00:25:11.995628 kubelet[2749]: I1106 00:25:11.995483 2749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:25:12.026902 kubelet[2749]: E1106 00:25:12.026853 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:12.027521 kubelet[2749]: I1106 00:25:12.027465 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:12.028414 kubelet[2749]: E1106 00:25:12.028373 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:12.438940 kubelet[2749]: E1106 00:25:12.438068 2749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 00:25:12.439348 kubelet[2749]: E1106 00:25:12.439324 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 
00:25:12.466704 kubelet[2749]: I1106 00:25:12.466473 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.466441486 podStartE2EDuration="1.466441486s" podCreationTimestamp="2025-11-06 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:12.43610161 +0000 UTC m=+1.547248950" watchObservedRunningTime="2025-11-06 00:25:12.466441486 +0000 UTC m=+1.577588826" Nov 6 00:25:12.511819 kubelet[2749]: I1106 00:25:12.511728 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.511702245 podStartE2EDuration="1.511702245s" podCreationTimestamp="2025-11-06 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:12.467057711 +0000 UTC m=+1.578205051" watchObservedRunningTime="2025-11-06 00:25:12.511702245 +0000 UTC m=+1.622849585" Nov 6 00:25:12.530471 sudo[2788]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:13.028415 kubelet[2749]: E1106 00:25:13.028363 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:13.029113 kubelet[2749]: E1106 00:25:13.029091 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:13.750755 kubelet[2749]: E1106 00:25:13.750717 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:14.030509 kubelet[2749]: E1106 00:25:14.030028 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:14.137140 kubelet[2749]: E1106 00:25:14.137079 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:14.876811 kubelet[2749]: I1106 00:25:14.876758 2749 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:25:14.877168 containerd[1572]: time="2025-11-06T00:25:14.877132854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:25:14.877644 kubelet[2749]: I1106 00:25:14.877456 2749 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:25:14.908926 sudo[1785]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:14.911319 sshd[1784]: Connection closed by 10.0.0.1 port 33848 Nov 6 00:25:14.914472 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:14.927473 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:33848.service: Deactivated successfully. Nov 6 00:25:14.930474 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:25:14.930750 systemd[1]: session-7.scope: Consumed 9.821s CPU time, 264.9M memory peak. Nov 6 00:25:14.932305 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. 
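The kubelet entries just above record the node being handed pod CIDR 192.168.0.0/24 and pushing it to the container runtime over CRI. A minimal Go sketch of the containment check that CIDR implies, using only the standard library; the sample pod IP is an arbitrary illustration, not a value taken from this log.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Pod CIDR as reported by the kubelet above.
        podCIDR := netip.MustParsePrefix("192.168.0.0/24")

        // Hypothetical pod IP, used only for illustration.
        podIP := netip.MustParseAddr("192.168.0.17")

        if podCIDR.Contains(podIP) {
            fmt.Printf("%s falls inside the node pod CIDR %s\n", podIP, podCIDR)
        } else {
            fmt.Printf("%s is outside %s\n", podIP, podCIDR)
        }
    }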
Nov 6 00:25:14.933804 systemd-logind[1551]: Removed session 7. Nov 6 00:25:15.031807 kubelet[2749]: E1106 00:25:15.031746 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:16.034188 kubelet[2749]: E1106 00:25:16.034094 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:16.415900 systemd[1]: Created slice kubepods-besteffort-pod5cf777d7_af5b_4534_8e50_1a8f19bdb79c.slice - libcontainer container kubepods-besteffort-pod5cf777d7_af5b_4534_8e50_1a8f19bdb79c.slice. Nov 6 00:25:16.435240 kubelet[2749]: I1106 00:25:16.435161 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cf777d7-af5b-4534-8e50-1a8f19bdb79c-kube-proxy\") pod \"kube-proxy-gq2nj\" (UID: \"5cf777d7-af5b-4534-8e50-1a8f19bdb79c\") " pod="kube-system/kube-proxy-gq2nj" Nov 6 00:25:16.435240 kubelet[2749]: I1106 00:25:16.435208 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cf777d7-af5b-4534-8e50-1a8f19bdb79c-xtables-lock\") pod \"kube-proxy-gq2nj\" (UID: \"5cf777d7-af5b-4534-8e50-1a8f19bdb79c\") " pod="kube-system/kube-proxy-gq2nj" Nov 6 00:25:16.435240 kubelet[2749]: I1106 00:25:16.435253 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cf777d7-af5b-4534-8e50-1a8f19bdb79c-lib-modules\") pod \"kube-proxy-gq2nj\" (UID: \"5cf777d7-af5b-4534-8e50-1a8f19bdb79c\") " pod="kube-system/kube-proxy-gq2nj" Nov 6 00:25:16.435532 kubelet[2749]: I1106 00:25:16.435274 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5n7\" (UniqueName: \"kubernetes.io/projected/5cf777d7-af5b-4534-8e50-1a8f19bdb79c-kube-api-access-6m5n7\") pod \"kube-proxy-gq2nj\" (UID: \"5cf777d7-af5b-4534-8e50-1a8f19bdb79c\") " pod="kube-system/kube-proxy-gq2nj" Nov 6 00:25:16.455870 systemd[1]: Created slice kubepods-burstable-podf9c65ade_d44b_4842_bc4f_f4ce5dc0aa80.slice - libcontainer container kubepods-burstable-podf9c65ade_d44b_4842_bc4f_f4ce5dc0aa80.slice. Nov 6 00:25:16.512265 systemd[1]: Created slice kubepods-besteffort-poda22b28ac_1f95_4e81_b225_bd777e3f9e14.slice - libcontainer container kubepods-besteffort-poda22b28ac_1f95_4e81_b225_bd777e3f9e14.slice. 
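The systemd "Created slice" entries above encode the pod UID with dashes mapped to underscores (5cf777d7-af5b-4534-8e50-1a8f19bdb79c becomes kubepods-besteffort-pod5cf777d7_af5b_4534_8e50_1a8f19bdb79c.slice). A small sketch of that naming transformation, assuming the QoS class is known from context; it mirrors the pattern visible in the log rather than any particular kubelet API.

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming pattern seen in the "Created slice" entries:
    // kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of the kube-proxy pod from the volume-attach entries above (besteffort QoS).
        fmt.Println(sliceName("besteffort", "5cf777d7-af5b-4534-8e50-1a8f19bdb79c"))
        // UID of the cilium pod, which lands in the burstable QoS slice.
        fmt.Println(sliceName("burstable", "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"))
    }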
Nov 6 00:25:16.536134 kubelet[2749]: I1106 00:25:16.536087 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-lib-modules\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537120 kubelet[2749]: I1106 00:25:16.536177 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a22b28ac-1f95-4e81-b225-bd777e3f9e14-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-wd5fc\" (UID: \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\") " pod="kube-system/cilium-operator-6f9c7c5859-wd5fc" Nov 6 00:25:16.537120 kubelet[2749]: I1106 00:25:16.536693 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-clustermesh-secrets\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537120 kubelet[2749]: I1106 00:25:16.536753 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-run\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537120 kubelet[2749]: I1106 00:25:16.536858 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8qc\" (UniqueName: \"kubernetes.io/projected/a22b28ac-1f95-4e81-b225-bd777e3f9e14-kube-api-access-cd8qc\") pod \"cilium-operator-6f9c7c5859-wd5fc\" (UID: \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\") " pod="kube-system/cilium-operator-6f9c7c5859-wd5fc" Nov 6 00:25:16.537120 kubelet[2749]: I1106 00:25:16.536910 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hostproc\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.536940 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-cgroup\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.536961 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cni-path\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.536987 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-config-path\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.537011 2749 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-etc-cni-netd\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.537047 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-xtables-lock\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537556 kubelet[2749]: I1106 00:25:16.537091 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-net\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537776 kubelet[2749]: I1106 00:25:16.537117 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hubble-tls\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537776 kubelet[2749]: I1106 00:25:16.537140 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-kernel\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537776 kubelet[2749]: I1106 00:25:16.537199 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6k8j\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-kube-api-access-n6k8j\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.537776 kubelet[2749]: I1106 00:25:16.537338 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-bpf-maps\") pod \"cilium-4s8lm\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " pod="kube-system/cilium-4s8lm" Nov 6 00:25:16.865542 kubelet[2749]: E1106 00:25:16.865467 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:16.866540 containerd[1572]: time="2025-11-06T00:25:16.866456818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gq2nj,Uid:5cf777d7-af5b-4534-8e50-1a8f19bdb79c,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:16.951963 kubelet[2749]: E1106 00:25:16.951915 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:17.038174 kubelet[2749]: E1106 00:25:17.038095 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:17.093486 kubelet[2749]: E1106 00:25:17.093435 2749 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:17.094126 containerd[1572]: time="2025-11-06T00:25:17.094081021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s8lm,Uid:f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:17.167403 kubelet[2749]: E1106 00:25:17.167248 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:17.167957 containerd[1572]: time="2025-11-06T00:25:17.167905282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wd5fc,Uid:a22b28ac-1f95-4e81-b225-bd777e3f9e14,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:18.039541 kubelet[2749]: E1106 00:25:18.039503 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:18.218960 containerd[1572]: time="2025-11-06T00:25:18.218908327Z" level=info msg="connecting to shim 5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1" address="unix:///run/containerd/s/c2965e83f3f58ded6a9cd843ba44ec4de1d4fe347d1b79895c4046f7a0b4a6b7" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:18.260480 systemd[1]: Started cri-containerd-5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1.scope - libcontainer container 5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1. Nov 6 00:25:18.560714 containerd[1572]: time="2025-11-06T00:25:18.560579145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gq2nj,Uid:5cf777d7-af5b-4534-8e50-1a8f19bdb79c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1\"" Nov 6 00:25:18.561828 kubelet[2749]: E1106 00:25:18.561786 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:18.587784 containerd[1572]: time="2025-11-06T00:25:18.587706298Z" level=info msg="CreateContainer within sandbox \"5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:25:18.609187 containerd[1572]: time="2025-11-06T00:25:18.609095125Z" level=info msg="connecting to shim 8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0" address="unix:///run/containerd/s/0ec9c12f295ec3ca40964b31f8d4f58ca4d5e407a34012f3002a8c6514ff59a2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:18.610858 containerd[1572]: time="2025-11-06T00:25:18.610779499Z" level=info msg="Container e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:18.611778 containerd[1572]: time="2025-11-06T00:25:18.611737245Z" level=info msg="connecting to shim f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:18.625959 containerd[1572]: time="2025-11-06T00:25:18.625874322Z" level=info msg="CreateContainer within sandbox \"5601ca98a7a6f6b7fba64783858623e5c8d27ea3d76591cf3cb99379efd28ad1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4\"" Nov 6 00:25:18.627487 containerd[1572]: time="2025-11-06T00:25:18.627439833Z" level=info msg="StartContainer for \"e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4\"" Nov 6 00:25:18.632473 containerd[1572]: time="2025-11-06T00:25:18.632418688Z" level=info msg="connecting to shim e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4" address="unix:///run/containerd/s/c2965e83f3f58ded6a9cd843ba44ec4de1d4fe347d1b79895c4046f7a0b4a6b7" protocol=ttrpc version=3 Nov 6 00:25:18.636429 systemd[1]: Started cri-containerd-f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff.scope - libcontainer container f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff. Nov 6 00:25:18.640798 systemd[1]: Started cri-containerd-8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0.scope - libcontainer container 8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0. Nov 6 00:25:18.668452 systemd[1]: Started cri-containerd-e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4.scope - libcontainer container e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4. Nov 6 00:25:18.683508 containerd[1572]: time="2025-11-06T00:25:18.683468534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s8lm,Uid:f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\"" Nov 6 00:25:18.685428 kubelet[2749]: E1106 00:25:18.685400 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:18.689470 containerd[1572]: time="2025-11-06T00:25:18.689401608Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 00:25:18.966250 containerd[1572]: time="2025-11-06T00:25:18.965602218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wd5fc,Uid:a22b28ac-1f95-4e81-b225-bd777e3f9e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\"" Nov 6 00:25:18.967472 kubelet[2749]: E1106 00:25:18.966819 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:18.968708 containerd[1572]: time="2025-11-06T00:25:18.968665993Z" level=info msg="StartContainer for \"e48989225e55d6b8ce95922f60a93b7152f996fa0235ea3a34f235eb8ef63ff4\" returns successfully" Nov 6 00:25:19.044158 kubelet[2749]: E1106 00:25:19.044098 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:19.128952 kubelet[2749]: I1106 00:25:19.128869 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gq2nj" podStartSLOduration=3.128848611 podStartE2EDuration="3.128848611s" podCreationTimestamp="2025-11-06 00:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:19.12832818 +0000 UTC m=+8.239475520" watchObservedRunningTime="2025-11-06 00:25:19.128848611 +0000 UTC m=+8.239995951" 
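The containerd "connecting to shim" entries above reference per-sandbox unix sockets under /run/containerd/s/. The real client speaks ttrpc over those sockets; the sketch below only dials one to confirm something is listening, which can be a useful low-level check when a shim fails to come up. The socket path is copied from the kube-proxy sandbox entry and will differ on another host.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Shim socket path from the log above; on a live system, list
        // /run/containerd/s/ to find the current sockets.
        sock := "/run/containerd/s/c2965e83f3f58ded6a9cd843ba44ec4de1d4fe347d1b79895c4046f7a0b4a6b7"

        conn, err := net.DialTimeout("unix", sock, time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket is accepting connections:", sock)
    }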
Nov 6 00:25:28.484879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813089210.mount: Deactivated successfully. Nov 6 00:25:33.874555 containerd[1572]: time="2025-11-06T00:25:33.874431096Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:33.875943 containerd[1572]: time="2025-11-06T00:25:33.875897131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 00:25:33.880256 containerd[1572]: time="2025-11-06T00:25:33.880133862Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:33.882126 containerd[1572]: time="2025-11-06T00:25:33.882055001Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.192483494s" Nov 6 00:25:33.882126 containerd[1572]: time="2025-11-06T00:25:33.882105646Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 00:25:33.892022 containerd[1572]: time="2025-11-06T00:25:33.888005723Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 00:25:33.922241 containerd[1572]: time="2025-11-06T00:25:33.922147465Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:25:33.979140 containerd[1572]: time="2025-11-06T00:25:33.977148232Z" level=info msg="Container 7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:34.029989 containerd[1572]: time="2025-11-06T00:25:34.029646048Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\"" Nov 6 00:25:34.041037 containerd[1572]: time="2025-11-06T00:25:34.039284736Z" level=info msg="StartContainer for \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\"" Nov 6 00:25:34.041037 containerd[1572]: time="2025-11-06T00:25:34.040511782Z" level=info msg="connecting to shim 7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" protocol=ttrpc version=3 Nov 6 00:25:34.113577 systemd[1]: Started cri-containerd-7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c.scope - libcontainer container 7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c. 
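The cilium image pull that completes above reports bytes read=166730503 and a duration of 15.192483494s, which works out to roughly 11 MB/s (about 10.5 MiB/s). A short sketch of that arithmetic using the figures from the log:

    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503  // "bytes read" from the stop-pulling entry above
        const seconds = 15.192483494 // duration reported by the Pulled-image entry

        bps := float64(bytesRead) / seconds
        fmt.Printf("throughput: %.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1<<20))
    }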
Nov 6 00:25:34.226347 containerd[1572]: time="2025-11-06T00:25:34.222704680Z" level=info msg="StartContainer for \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" returns successfully" Nov 6 00:25:34.250476 systemd[1]: cri-containerd-7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c.scope: Deactivated successfully. Nov 6 00:25:34.254687 containerd[1572]: time="2025-11-06T00:25:34.254485889Z" level=info msg="received exit event container_id:\"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" id:\"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" pid:3183 exited_at:{seconds:1762388734 nanos:253793198}" Nov 6 00:25:34.254805 containerd[1572]: time="2025-11-06T00:25:34.254711403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" id:\"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" pid:3183 exited_at:{seconds:1762388734 nanos:253793198}" Nov 6 00:25:34.329000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c-rootfs.mount: Deactivated successfully. Nov 6 00:25:34.924961 kubelet[2749]: E1106 00:25:34.924587 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:35.937499 kubelet[2749]: E1106 00:25:35.933970 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:36.955455 kubelet[2749]: E1106 00:25:36.952339 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:36.987760 containerd[1572]: time="2025-11-06T00:25:36.985933254Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:25:37.049544 containerd[1572]: time="2025-11-06T00:25:37.049482644Z" level=info msg="Container 17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:37.072209 containerd[1572]: time="2025-11-06T00:25:37.071796342Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\"" Nov 6 00:25:37.073114 containerd[1572]: time="2025-11-06T00:25:37.073056709Z" level=info msg="StartContainer for \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\"" Nov 6 00:25:37.074610 containerd[1572]: time="2025-11-06T00:25:37.074530447Z" level=info msg="connecting to shim 17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" protocol=ttrpc version=3 Nov 6 00:25:37.132196 systemd[1]: Started cri-containerd-17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861.scope - libcontainer container 17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861. 
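The exit events above carry the termination time as a seconds/nanos pair (exited_at:{seconds:1762388734 nanos:253793198}). Converting it with the Go standard library gives back the wall-clock time of the surrounding journal entries:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the mount-cgroup exit event above.
        exitedAt := time.Unix(1762388734, 253793198).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-11-06T00:25:34.253793198Z
    }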
Nov 6 00:25:37.573031 containerd[1572]: time="2025-11-06T00:25:37.572945063Z" level=info msg="StartContainer for \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" returns successfully" Nov 6 00:25:37.605645 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:25:37.606268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:37.607857 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:25:37.612797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:25:37.617484 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:25:37.620931 containerd[1572]: time="2025-11-06T00:25:37.618525652Z" level=info msg="received exit event container_id:\"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" id:\"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" pid:3231 exited_at:{seconds:1762388737 nanos:618189280}" Nov 6 00:25:37.620931 containerd[1572]: time="2025-11-06T00:25:37.618887151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" id:\"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" pid:3231 exited_at:{seconds:1762388737 nanos:618189280}" Nov 6 00:25:37.618126 systemd[1]: cri-containerd-17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861.scope: Deactivated successfully. Nov 6 00:25:37.682269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:38.044317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861-rootfs.mount: Deactivated successfully. Nov 6 00:25:38.221677 kubelet[2749]: E1106 00:25:38.221607 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:38.449017 containerd[1572]: time="2025-11-06T00:25:38.448949907Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:25:39.247375 containerd[1572]: time="2025-11-06T00:25:39.247301474Z" level=info msg="Container 528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:39.276390 containerd[1572]: time="2025-11-06T00:25:39.276321248Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\"" Nov 6 00:25:39.277200 containerd[1572]: time="2025-11-06T00:25:39.277016343Z" level=info msg="StartContainer for \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\"" Nov 6 00:25:39.278607 containerd[1572]: time="2025-11-06T00:25:39.278573608Z" level=info msg="connecting to shim 528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" protocol=ttrpc version=3 Nov 6 00:25:39.300448 systemd[1]: Started cri-containerd-528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca.scope - libcontainer container 528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca. 
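The apply-sysctl-overwrites init container and the systemd-sysctl restart around it both deal with kernel parameters under /proc/sys. The exact keys touched are not visible in this log, so the sketch below simply reads one commonly relevant parameter as an arbitrary example of how such values are inspected on the host.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readSysctl returns the value of a kernel parameter from /proc/sys.
    // The dotted name is mapped to a path, e.g. net.ipv4.ip_forward ->
    // /proc/sys/net/ipv4/ip_forward.
    func readSysctl(name string) (string, error) {
        path := "/proc/sys/" + strings.ReplaceAll(name, ".", "/")
        b, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        // Arbitrary example key; the log does not say which sysctls were overwritten.
        v, err := readSysctl("net.ipv4.ip_forward")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("net.ipv4.ip_forward =", v)
    }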
Nov 6 00:25:39.361710 systemd[1]: cri-containerd-528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca.scope: Deactivated successfully. Nov 6 00:25:39.364547 containerd[1572]: time="2025-11-06T00:25:39.364514249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" id:\"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" pid:3276 exited_at:{seconds:1762388739 nanos:364162047}" Nov 6 00:25:39.367420 containerd[1572]: time="2025-11-06T00:25:39.367365052Z" level=info msg="received exit event container_id:\"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" id:\"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" pid:3276 exited_at:{seconds:1762388739 nanos:364162047}" Nov 6 00:25:39.382886 containerd[1572]: time="2025-11-06T00:25:39.382739372Z" level=info msg="StartContainer for \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" returns successfully" Nov 6 00:25:39.385015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824199620.mount: Deactivated successfully. Nov 6 00:25:39.771332 containerd[1572]: time="2025-11-06T00:25:39.771209371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:39.772245 containerd[1572]: time="2025-11-06T00:25:39.772138487Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 00:25:39.773453 containerd[1572]: time="2025-11-06T00:25:39.773391420Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:39.774410 containerd[1572]: time="2025-11-06T00:25:39.774380066Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.886323568s" Nov 6 00:25:39.774410 containerd[1572]: time="2025-11-06T00:25:39.774409691Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 00:25:39.780316 containerd[1572]: time="2025-11-06T00:25:39.780270928Z" level=info msg="CreateContainer within sandbox \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 00:25:39.788062 containerd[1572]: time="2025-11-06T00:25:39.787994872Z" level=info msg="Container 73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:39.797149 containerd[1572]: time="2025-11-06T00:25:39.797099159Z" level=info msg="CreateContainer within sandbox \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\"" Nov 6 00:25:39.797807 containerd[1572]: time="2025-11-06T00:25:39.797774967Z" level=info msg="StartContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\"" Nov 6 00:25:39.798897 containerd[1572]: time="2025-11-06T00:25:39.798852661Z" level=info msg="connecting to shim 73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c" address="unix:///run/containerd/s/0ec9c12f295ec3ca40964b31f8d4f58ca4d5e407a34012f3002a8c6514ff59a2" protocol=ttrpc version=3 Nov 6 00:25:39.825419 systemd[1]: Started cri-containerd-73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c.scope - libcontainer container 73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c. Nov 6 00:25:39.870670 containerd[1572]: time="2025-11-06T00:25:39.870603747Z" level=info msg="StartContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" returns successfully" Nov 6 00:25:40.230177 kubelet[2749]: E1106 00:25:40.230034 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:40.240985 kubelet[2749]: E1106 00:25:40.240929 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:40.245038 kubelet[2749]: I1106 00:25:40.244784 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-wd5fc" podStartSLOduration=3.437104347 podStartE2EDuration="24.244763748s" podCreationTimestamp="2025-11-06 00:25:16 +0000 UTC" firstStartedPulling="2025-11-06 00:25:18.96795345 +0000 UTC m=+8.079100790" lastFinishedPulling="2025-11-06 00:25:39.77561285 +0000 UTC m=+28.886760191" observedRunningTime="2025-11-06 00:25:40.244344841 +0000 UTC m=+29.355492181" watchObservedRunningTime="2025-11-06 00:25:40.244763748 +0000 UTC m=+29.355911088" Nov 6 00:25:40.249596 containerd[1572]: time="2025-11-06T00:25:40.249542671Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:25:40.252341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca-rootfs.mount: Deactivated successfully. 
Nov 6 00:25:40.276509 containerd[1572]: time="2025-11-06T00:25:40.276446227Z" level=info msg="Container 36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:40.295250 containerd[1572]: time="2025-11-06T00:25:40.293884781Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\"" Nov 6 00:25:40.297120 containerd[1572]: time="2025-11-06T00:25:40.296374796Z" level=info msg="StartContainer for \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\"" Nov 6 00:25:40.298862 containerd[1572]: time="2025-11-06T00:25:40.298793608Z" level=info msg="connecting to shim 36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" protocol=ttrpc version=3 Nov 6 00:25:40.352099 systemd[1]: Started cri-containerd-36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a.scope - libcontainer container 36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a. Nov 6 00:25:40.406454 systemd[1]: cri-containerd-36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a.scope: Deactivated successfully. Nov 6 00:25:40.408024 containerd[1572]: time="2025-11-06T00:25:40.407973932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" id:\"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" pid:3367 exited_at:{seconds:1762388740 nanos:407723483}" Nov 6 00:25:40.410460 containerd[1572]: time="2025-11-06T00:25:40.410300071Z" level=info msg="received exit event container_id:\"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" id:\"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" pid:3367 exited_at:{seconds:1762388740 nanos:407723483}" Nov 6 00:25:40.423060 containerd[1572]: time="2025-11-06T00:25:40.422961835Z" level=info msg="StartContainer for \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" returns successfully" Nov 6 00:25:40.441754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a-rootfs.mount: Deactivated successfully. 
Nov 6 00:25:41.312448 kubelet[2749]: E1106 00:25:41.312345 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:41.313594 kubelet[2749]: E1106 00:25:41.312817 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:42.318976 kubelet[2749]: E1106 00:25:42.318906 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:42.623173 containerd[1572]: time="2025-11-06T00:25:42.623102579Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:25:43.181594 containerd[1572]: time="2025-11-06T00:25:43.181512601Z" level=info msg="Container 16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:43.307543 containerd[1572]: time="2025-11-06T00:25:43.307475296Z" level=info msg="CreateContainer within sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\"" Nov 6 00:25:43.308405 containerd[1572]: time="2025-11-06T00:25:43.308365467Z" level=info msg="StartContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\"" Nov 6 00:25:43.309452 containerd[1572]: time="2025-11-06T00:25:43.309428353Z" level=info msg="connecting to shim 16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b" address="unix:///run/containerd/s/b62eb605a29cc2107cb5f9fad867a804a14e45b7d0099eef0247e0a1ef2e2f7a" protocol=ttrpc version=3 Nov 6 00:25:43.338532 systemd[1]: Started cri-containerd-16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b.scope - libcontainer container 16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b. Nov 6 00:25:43.434487 containerd[1572]: time="2025-11-06T00:25:43.434048357Z" level=info msg="StartContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" returns successfully" Nov 6 00:25:43.516682 containerd[1572]: time="2025-11-06T00:25:43.515155507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" id:\"7fdd5463dae6c4e3dd618f1f0ff24e3667c3b65e8ed171115bed878bc9466563\" pid:3441 exited_at:{seconds:1762388743 nanos:513211097}" Nov 6 00:25:43.585576 kubelet[2749]: I1106 00:25:43.585511 2749 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:25:43.837407 systemd[1]: Created slice kubepods-burstable-pod96b5b61a_69fc_4521_a941_bfcaac21dc2c.slice - libcontainer container kubepods-burstable-pod96b5b61a_69fc_4521_a941_bfcaac21dc2c.slice. Nov 6 00:25:43.856580 systemd[1]: Created slice kubepods-burstable-pod11fc6158_f958_44e7_b193_a25bc71c37f1.slice - libcontainer container kubepods-burstable-pod11fc6158_f958_44e7_b193_a25bc71c37f1.slice. 
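The recurring "Nameserver limits exceeded" errors throughout this boot mean the resolv.conf handed to pods listed more nameservers than the resolver supports, and that only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) were applied. A quick way to spot this condition on the host is to count nameserver lines, as in the sketch below; the three-entry cap is the conventional glibc/kubelet limit, and the file path may differ if the kubelet is pointed elsewhere via --resolv-conf.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Usual location; systemd-resolved setups may use
        // /run/systemd/resolve/resolv.conf instead.
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        fmt.Println("nameservers:", servers)
        if len(servers) > 3 {
            fmt.Printf("more than 3 nameservers; only the first 3 (%v) will be applied\n", servers[:3])
        }
    }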
Nov 6 00:25:43.978759 kubelet[2749]: I1106 00:25:43.978691 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2vh\" (UniqueName: \"kubernetes.io/projected/96b5b61a-69fc-4521-a941-bfcaac21dc2c-kube-api-access-gq2vh\") pod \"coredns-66bc5c9577-cdt9x\" (UID: \"96b5b61a-69fc-4521-a941-bfcaac21dc2c\") " pod="kube-system/coredns-66bc5c9577-cdt9x" Nov 6 00:25:43.978759 kubelet[2749]: I1106 00:25:43.978753 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96b5b61a-69fc-4521-a941-bfcaac21dc2c-config-volume\") pod \"coredns-66bc5c9577-cdt9x\" (UID: \"96b5b61a-69fc-4521-a941-bfcaac21dc2c\") " pod="kube-system/coredns-66bc5c9577-cdt9x" Nov 6 00:25:43.978759 kubelet[2749]: I1106 00:25:43.978781 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rl6\" (UniqueName: \"kubernetes.io/projected/11fc6158-f958-44e7-b193-a25bc71c37f1-kube-api-access-p4rl6\") pod \"coredns-66bc5c9577-lkrcb\" (UID: \"11fc6158-f958-44e7-b193-a25bc71c37f1\") " pod="kube-system/coredns-66bc5c9577-lkrcb" Nov 6 00:25:43.979008 kubelet[2749]: I1106 00:25:43.978825 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11fc6158-f958-44e7-b193-a25bc71c37f1-config-volume\") pod \"coredns-66bc5c9577-lkrcb\" (UID: \"11fc6158-f958-44e7-b193-a25bc71c37f1\") " pod="kube-system/coredns-66bc5c9577-lkrcb" Nov 6 00:25:44.313979 kubelet[2749]: E1106 00:25:44.313752 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:44.322435 containerd[1572]: time="2025-11-06T00:25:44.322363542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkrcb,Uid:11fc6158-f958-44e7-b193-a25bc71c37f1,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:44.349493 kubelet[2749]: E1106 00:25:44.349448 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:44.590053 kubelet[2749]: I1106 00:25:44.589912 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4s8lm" podStartSLOduration=13.391206076 podStartE2EDuration="28.589888546s" podCreationTimestamp="2025-11-06 00:25:16 +0000 UTC" firstStartedPulling="2025-11-06 00:25:18.688042526 +0000 UTC m=+7.799189866" lastFinishedPulling="2025-11-06 00:25:33.886724996 +0000 UTC m=+22.997872336" observedRunningTime="2025-11-06 00:25:44.589393376 +0000 UTC m=+33.700540747" watchObservedRunningTime="2025-11-06 00:25:44.589888546 +0000 UTC m=+33.701035886" Nov 6 00:25:44.822003 kubelet[2749]: E1106 00:25:44.821891 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:44.824398 containerd[1572]: time="2025-11-06T00:25:44.824347610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdt9x,Uid:96b5b61a-69fc-4521-a941-bfcaac21dc2c,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:45.351185 kubelet[2749]: E1106 00:25:45.351131 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:45.898773 systemd-networkd[1495]: cilium_host: Link UP Nov 6 00:25:45.899030 systemd-networkd[1495]: cilium_net: Link UP Nov 6 00:25:45.899353 systemd-networkd[1495]: cilium_net: Gained carrier Nov 6 00:25:45.899629 systemd-networkd[1495]: cilium_host: Gained carrier Nov 6 00:25:46.045156 systemd-networkd[1495]: cilium_vxlan: Link UP Nov 6 00:25:46.045165 systemd-networkd[1495]: cilium_vxlan: Gained carrier Nov 6 00:25:46.278331 kernel: NET: Registered PF_ALG protocol family Nov 6 00:25:46.365531 systemd-networkd[1495]: cilium_net: Gained IPv6LL Nov 6 00:25:46.709494 systemd-networkd[1495]: cilium_host: Gained IPv6LL Nov 6 00:25:47.051810 systemd-networkd[1495]: lxc_health: Link UP Nov 6 00:25:47.062860 systemd-networkd[1495]: lxc_health: Gained carrier Nov 6 00:25:47.234140 systemd-networkd[1495]: lxc140611b3b0ba: Link UP Nov 6 00:25:47.305300 kernel: eth0: renamed from tmpc2cc8 Nov 6 00:25:47.306610 systemd-networkd[1495]: lxc140611b3b0ba: Gained carrier Nov 6 00:25:47.463116 systemd-networkd[1495]: lxc199b4cdaf29b: Link UP Nov 6 00:25:47.576275 kernel: eth0: renamed from tmp49f2e Nov 6 00:25:47.579851 systemd-networkd[1495]: cilium_vxlan: Gained IPv6LL Nov 6 00:25:47.583438 systemd-networkd[1495]: lxc199b4cdaf29b: Gained carrier Nov 6 00:25:48.629444 systemd-networkd[1495]: lxc_health: Gained IPv6LL Nov 6 00:25:48.757490 systemd-networkd[1495]: lxc140611b3b0ba: Gained IPv6LL Nov 6 00:25:48.762269 kubelet[2749]: E1106 00:25:48.762197 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:48.949448 systemd-networkd[1495]: lxc199b4cdaf29b: Gained IPv6LL Nov 6 00:25:49.360651 kubelet[2749]: E1106 00:25:49.360603 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:50.364112 kubelet[2749]: E1106 00:25:50.364032 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:52.578285 containerd[1572]: time="2025-11-06T00:25:52.578200658Z" level=info msg="connecting to shim 49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354" address="unix:///run/containerd/s/2d0c4dddcab22704231d9fe0ac14e921c401ad9bf2142d0219ec196b037798da" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:52.619515 systemd[1]: Started cri-containerd-49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354.scope - libcontainer container 49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354. Nov 6 00:25:52.633377 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:25:52.643541 containerd[1572]: time="2025-11-06T00:25:52.643485239Z" level=info msg="connecting to shim c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0" address="unix:///run/containerd/s/4ad672065bf11486673c34b5a54e206f119382c6a4da01e8cdeb410e1f4ddfe3" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:52.669377 systemd[1]: Started cri-containerd-c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0.scope - libcontainer container c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0. 
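The systemd-networkd entries above show the cilium datapath devices (cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxc* interfaces) gaining carrier and IPv6 link-local addresses. A minimal sketch that enumerates interfaces and their flags with the standard library, which is a convenient way to confirm those links exist on the node:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Show only cilium/lxc devices like the ones brought up in the log;
            // drop the prefix filter to see everything.
            if !strings.HasPrefix(ifc.Name, "cilium") && !strings.HasPrefix(ifc.Name, "lxc") {
                continue
            }
            addrs, _ := ifc.Addrs()
            fmt.Printf("%-16s flags=%v addrs=%v\n", ifc.Name, ifc.Flags, addrs)
        }
    }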
Nov 6 00:25:52.684306 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:25:52.845432 containerd[1572]: time="2025-11-06T00:25:52.845275563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdt9x,Uid:96b5b61a-69fc-4521-a941-bfcaac21dc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354\"" Nov 6 00:25:52.846383 kubelet[2749]: E1106 00:25:52.846348 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:53.031213 containerd[1572]: time="2025-11-06T00:25:53.031155661Z" level=info msg="CreateContainer within sandbox \"49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:25:53.076364 containerd[1572]: time="2025-11-06T00:25:53.076301451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkrcb,Uid:11fc6158-f958-44e7-b193-a25bc71c37f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0\"" Nov 6 00:25:53.077443 kubelet[2749]: E1106 00:25:53.077415 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:53.275038 containerd[1572]: time="2025-11-06T00:25:53.274878443Z" level=info msg="CreateContainer within sandbox \"c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:25:53.632499 containerd[1572]: time="2025-11-06T00:25:53.632188063Z" level=info msg="Container bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:53.634791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980939513.mount: Deactivated successfully. Nov 6 00:25:53.681667 containerd[1572]: time="2025-11-06T00:25:53.681609186Z" level=info msg="Container 717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:53.802814 containerd[1572]: time="2025-11-06T00:25:53.802737405Z" level=info msg="CreateContainer within sandbox \"49f2e4030172090364041c3816fb302c47e6ab20248b572965b35feaffc5f354\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018\"" Nov 6 00:25:53.804098 containerd[1572]: time="2025-11-06T00:25:53.804038636Z" level=info msg="StartContainer for \"bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018\"" Nov 6 00:25:53.805449 containerd[1572]: time="2025-11-06T00:25:53.805384993Z" level=info msg="connecting to shim bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018" address="unix:///run/containerd/s/2d0c4dddcab22704231d9fe0ac14e921c401ad9bf2142d0219ec196b037798da" protocol=ttrpc version=3 Nov 6 00:25:53.835653 systemd[1]: Started cri-containerd-bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018.scope - libcontainer container bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018. 
Nov 6 00:25:53.850332 containerd[1572]: time="2025-11-06T00:25:53.850263251Z" level=info msg="CreateContainer within sandbox \"c2cc823653157956292eda48eb301bf69a64ceea876428130ab032ea92ea0cd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a\"" Nov 6 00:25:53.851297 containerd[1572]: time="2025-11-06T00:25:53.851269138Z" level=info msg="StartContainer for \"717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a\"" Nov 6 00:25:53.852664 containerd[1572]: time="2025-11-06T00:25:53.852602900Z" level=info msg="connecting to shim 717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a" address="unix:///run/containerd/s/4ad672065bf11486673c34b5a54e206f119382c6a4da01e8cdeb410e1f4ddfe3" protocol=ttrpc version=3 Nov 6 00:25:53.885690 systemd[1]: Started cri-containerd-717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a.scope - libcontainer container 717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a. Nov 6 00:25:53.895212 containerd[1572]: time="2025-11-06T00:25:53.895128224Z" level=info msg="StartContainer for \"bc7b1584118dea99c32cc40ec180e435640d02ff4047c9a5b841b020eeb84018\" returns successfully" Nov 6 00:25:53.936160 containerd[1572]: time="2025-11-06T00:25:53.936032635Z" level=info msg="StartContainer for \"717049ef639c2d53ecf6f6945630bb4e74d2e5ec5e00455ae2c01550b35bbc2a\" returns successfully" Nov 6 00:25:54.384541 kubelet[2749]: E1106 00:25:54.384448 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:54.386798 kubelet[2749]: E1106 00:25:54.386756 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:54.419983 kubelet[2749]: I1106 00:25:54.419911 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cdt9x" podStartSLOduration=38.419888566 podStartE2EDuration="38.419888566s" podCreationTimestamp="2025-11-06 00:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:54.405024747 +0000 UTC m=+43.516172097" watchObservedRunningTime="2025-11-06 00:25:54.419888566 +0000 UTC m=+43.531035906" Nov 6 00:25:54.469486 kubelet[2749]: I1106 00:25:54.469316 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lkrcb" podStartSLOduration=38.469288955 podStartE2EDuration="38.469288955s" podCreationTimestamp="2025-11-06 00:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:54.467738456 +0000 UTC m=+43.578885796" watchObservedRunningTime="2025-11-06 00:25:54.469288955 +0000 UTC m=+43.580436325" Nov 6 00:25:55.389523 kubelet[2749]: E1106 00:25:55.389472 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:55.390158 kubelet[2749]: E1106 00:25:55.389698 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:56.393695 kubelet[2749]: 
E1106 00:25:56.393566 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:56.393695 kubelet[2749]: E1106 00:25:56.393664 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:05.115527 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:49216.service - OpenSSH per-connection server daemon (10.0.0.1:49216). Nov 6 00:26:05.169910 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 49216 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:05.172039 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:05.177887 systemd-logind[1551]: New session 8 of user core. Nov 6 00:26:05.188625 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:26:05.352177 sshd[4080]: Connection closed by 10.0.0.1 port 49216 Nov 6 00:26:05.353369 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:05.358431 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:49216.service: Deactivated successfully. Nov 6 00:26:05.361154 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:26:05.362450 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:26:05.364415 systemd-logind[1551]: Removed session 8. Nov 6 00:26:10.377617 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:33586.service - OpenSSH per-connection server daemon (10.0.0.1:33586). Nov 6 00:26:10.434621 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 33586 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:10.436870 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:10.443399 systemd-logind[1551]: New session 9 of user core. Nov 6 00:26:10.459410 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:26:10.578655 sshd[4097]: Connection closed by 10.0.0.1 port 33586 Nov 6 00:26:10.579093 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:10.584303 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:33586.service: Deactivated successfully. Nov 6 00:26:10.586935 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:26:10.587856 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:26:10.589650 systemd-logind[1551]: Removed session 9. Nov 6 00:26:15.593732 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:33600.service - OpenSSH per-connection server daemon (10.0.0.1:33600). Nov 6 00:26:15.663807 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 33600 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:15.665782 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:15.670640 systemd-logind[1551]: New session 10 of user core. Nov 6 00:26:15.685493 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:26:15.823844 sshd[4117]: Connection closed by 10.0.0.1 port 33600 Nov 6 00:26:15.824378 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:15.830358 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:33600.service: Deactivated successfully. Nov 6 00:26:15.833169 systemd[1]: session-10.scope: Deactivated successfully. 
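The recurring dns.go:154 "Nameserver limits exceeded" events come from kubelet validating the resolv.conf it hands to pods: the classic resolver honours at most three nameservers, so kubelet drops the surplus and warns, and the applied line in the log is exactly "1.1.1.1 1.0.0.1 8.8.8.8". The snippet below is a simplified illustration of that truncation rule, not kubelet's implementation; the fourth upstream server in the example is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic three-nameserver resolver limit that
// kubelet enforces when building a pod's resolv.conf.
const maxNameservers = 3

// capNameservers keeps at most maxNameservers entries and reports whether any
// were dropped, mimicking the warning seen in the kubelet log above.
func capNameservers(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	upstream := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // fourth entry is hypothetical
	applied, truncated := capNameservers(upstream)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```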
Nov 6 00:26:15.835597 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:26:15.839568 systemd-logind[1551]: Removed session 10. Nov 6 00:26:20.838453 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:48390.service - OpenSSH per-connection server daemon (10.0.0.1:48390). Nov 6 00:26:21.029008 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 48390 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:21.031485 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:21.037423 systemd-logind[1551]: New session 11 of user core. Nov 6 00:26:21.045451 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:26:21.212555 sshd[4136]: Connection closed by 10.0.0.1 port 48390 Nov 6 00:26:21.212911 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:21.219183 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:48390.service: Deactivated successfully. Nov 6 00:26:21.222076 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:26:21.223094 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:26:21.224759 systemd-logind[1551]: Removed session 11. Nov 6 00:26:22.012807 kubelet[2749]: E1106 00:26:22.012745 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:26.231782 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:48448.service - OpenSSH per-connection server daemon (10.0.0.1:48448). Nov 6 00:26:26.291868 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 48448 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:26.293753 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:26.298905 systemd-logind[1551]: New session 12 of user core. Nov 6 00:26:26.316473 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:26:26.543888 sshd[4153]: Connection closed by 10.0.0.1 port 48448 Nov 6 00:26:26.544189 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:26.548503 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:48448.service: Deactivated successfully. Nov 6 00:26:26.550676 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:26:26.551743 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:26:26.553413 systemd-logind[1551]: Removed session 12. Nov 6 00:26:31.569911 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034). Nov 6 00:26:31.648298 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:31.648915 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:31.656156 systemd-logind[1551]: New session 13 of user core. Nov 6 00:26:31.665413 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:26:31.838672 sshd[4171]: Connection closed by 10.0.0.1 port 48034 Nov 6 00:26:31.837025 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:31.855188 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:48034.service: Deactivated successfully. Nov 6 00:26:31.858417 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:26:31.860580 systemd-logind[1551]: Session 13 logged out. 
Waiting for processes to exit. Nov 6 00:26:31.865990 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:48050.service - OpenSSH per-connection server daemon (10.0.0.1:48050). Nov 6 00:26:31.868328 systemd-logind[1551]: Removed session 13. Nov 6 00:26:31.953939 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:31.955942 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:31.962765 systemd-logind[1551]: New session 14 of user core. Nov 6 00:26:31.973497 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:26:32.189854 sshd[4188]: Connection closed by 10.0.0.1 port 48050 Nov 6 00:26:32.192350 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:32.203321 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:48050.service: Deactivated successfully. Nov 6 00:26:32.207018 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:26:32.208989 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:26:32.215318 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062). Nov 6 00:26:32.216382 systemd-logind[1551]: Removed session 14. Nov 6 00:26:32.269648 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:32.271839 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:32.277316 systemd-logind[1551]: New session 15 of user core. Nov 6 00:26:32.287483 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:26:32.414936 sshd[4203]: Connection closed by 10.0.0.1 port 48062 Nov 6 00:26:32.415368 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:32.420265 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:48062.service: Deactivated successfully. Nov 6 00:26:32.422524 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:26:32.423452 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:26:32.425165 systemd-logind[1551]: Removed session 15. Nov 6 00:26:37.438903 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:48142.service - OpenSSH per-connection server daemon (10.0.0.1:48142). Nov 6 00:26:37.506706 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 48142 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:37.508325 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:37.512928 systemd-logind[1551]: New session 16 of user core. Nov 6 00:26:37.522378 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:26:37.641409 sshd[4220]: Connection closed by 10.0.0.1 port 48142 Nov 6 00:26:37.641780 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:37.646135 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:48142.service: Deactivated successfully. Nov 6 00:26:37.648405 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:26:37.649512 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:26:37.651116 systemd-logind[1551]: Removed session 16. 
Nov 6 00:26:38.013250 kubelet[2749]: E1106 00:26:38.013180 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:42.016125 kubelet[2749]: E1106 00:26:42.014410 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:42.669520 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:54772.service - OpenSSH per-connection server daemon (10.0.0.1:54772). Nov 6 00:26:42.771705 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 54772 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:42.775103 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:42.784406 systemd-logind[1551]: New session 17 of user core. Nov 6 00:26:42.799602 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:26:42.930003 sshd[4236]: Connection closed by 10.0.0.1 port 54772 Nov 6 00:26:42.930612 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:42.935525 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:54772.service: Deactivated successfully. Nov 6 00:26:42.937822 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:26:42.938742 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:26:42.940068 systemd-logind[1551]: Removed session 17. Nov 6 00:26:45.013695 kubelet[2749]: E1106 00:26:45.013448 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:47.945038 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:54804.service - OpenSSH per-connection server daemon (10.0.0.1:54804). Nov 6 00:26:48.003966 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 54804 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:48.005874 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:48.011115 systemd-logind[1551]: New session 18 of user core. Nov 6 00:26:48.020415 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:26:48.136702 sshd[4252]: Connection closed by 10.0.0.1 port 54804 Nov 6 00:26:48.137099 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:48.148471 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:54804.service: Deactivated successfully. Nov 6 00:26:48.150583 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:26:48.151347 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:26:48.154178 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:54820.service - OpenSSH per-connection server daemon (10.0.0.1:54820). Nov 6 00:26:48.154880 systemd-logind[1551]: Removed session 18. Nov 6 00:26:48.205505 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 54820 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:48.207135 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:48.212203 systemd-logind[1551]: New session 19 of user core. Nov 6 00:26:48.223479 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 6 00:26:48.511810 sshd[4268]: Connection closed by 10.0.0.1 port 54820 Nov 6 00:26:48.512334 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:48.524727 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:54820.service: Deactivated successfully. Nov 6 00:26:48.527310 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:26:48.528398 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:26:48.532658 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). Nov 6 00:26:48.533890 systemd-logind[1551]: Removed session 19. Nov 6 00:26:48.599893 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:48.601872 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:48.607629 systemd-logind[1551]: New session 20 of user core. Nov 6 00:26:48.613502 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:26:49.144772 sshd[4283]: Connection closed by 10.0.0.1 port 54834 Nov 6 00:26:49.145145 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:49.156039 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:54834.service: Deactivated successfully. Nov 6 00:26:49.158380 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:26:49.159541 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:26:49.162596 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838). Nov 6 00:26:49.163569 systemd-logind[1551]: Removed session 20. Nov 6 00:26:49.216660 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:49.218344 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:49.223183 systemd-logind[1551]: New session 21 of user core. Nov 6 00:26:49.230411 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:26:49.747914 sshd[4308]: Connection closed by 10.0.0.1 port 54838 Nov 6 00:26:49.748474 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:49.761152 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:54838.service: Deactivated successfully. Nov 6 00:26:49.764330 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:26:49.765558 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:26:49.770055 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:54842.service - OpenSSH per-connection server daemon (10.0.0.1:54842). Nov 6 00:26:49.771250 systemd-logind[1551]: Removed session 21. Nov 6 00:26:49.821883 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 54842 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:49.824463 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:49.830954 systemd-logind[1551]: New session 22 of user core. Nov 6 00:26:49.841619 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:26:49.966970 sshd[4323]: Connection closed by 10.0.0.1 port 54842 Nov 6 00:26:49.967425 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:49.972271 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:54842.service: Deactivated successfully. 
Nov 6 00:26:49.974608 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:26:49.975422 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:26:49.976996 systemd-logind[1551]: Removed session 22. Nov 6 00:26:51.013719 kubelet[2749]: E1106 00:26:51.013669 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:54.982805 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:35774.service - OpenSSH per-connection server daemon (10.0.0.1:35774). Nov 6 00:26:55.075107 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 35774 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:26:55.077029 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:55.082110 systemd-logind[1551]: New session 23 of user core. Nov 6 00:26:55.098483 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:26:55.240984 sshd[4341]: Connection closed by 10.0.0.1 port 35774 Nov 6 00:26:55.241380 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:55.247679 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:35774.service: Deactivated successfully. Nov 6 00:26:55.250616 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:26:55.251711 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:26:55.253576 systemd-logind[1551]: Removed session 23. Nov 6 00:27:00.259238 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:59064.service - OpenSSH per-connection server daemon (10.0.0.1:59064). Nov 6 00:27:00.324872 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 59064 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:00.327336 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:00.332345 systemd-logind[1551]: New session 24 of user core. Nov 6 00:27:00.339407 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:27:00.458773 sshd[4360]: Connection closed by 10.0.0.1 port 59064 Nov 6 00:27:00.459171 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:00.462833 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:59064.service: Deactivated successfully. Nov 6 00:27:00.465300 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:27:00.467271 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:27:00.468881 systemd-logind[1551]: Removed session 24. Nov 6 00:27:05.478771 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:59078.service - OpenSSH per-connection server daemon (10.0.0.1:59078). Nov 6 00:27:05.544358 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 59078 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:05.546172 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:05.551701 systemd-logind[1551]: New session 25 of user core. Nov 6 00:27:05.561972 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:27:05.686695 sshd[4376]: Connection closed by 10.0.0.1 port 59078 Nov 6 00:27:05.687180 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:05.693852 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:59078.service: Deactivated successfully. 
Nov 6 00:27:05.696340 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:27:05.697194 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:27:05.699104 systemd-logind[1551]: Removed session 25. Nov 6 00:27:09.013791 kubelet[2749]: E1106 00:27:09.013710 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:10.704468 systemd[1]: Started sshd@25-10.0.0.90:22-10.0.0.1:42972.service - OpenSSH per-connection server daemon (10.0.0.1:42972). Nov 6 00:27:10.766445 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 42972 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:10.768106 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:10.772736 systemd-logind[1551]: New session 26 of user core. Nov 6 00:27:10.782352 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:27:10.890766 sshd[4393]: Connection closed by 10.0.0.1 port 42972 Nov 6 00:27:10.891276 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:10.905009 systemd[1]: sshd@25-10.0.0.90:22-10.0.0.1:42972.service: Deactivated successfully. Nov 6 00:27:10.907087 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:27:10.908035 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:27:10.910998 systemd[1]: Started sshd@26-10.0.0.90:22-10.0.0.1:42986.service - OpenSSH per-connection server daemon (10.0.0.1:42986). Nov 6 00:27:10.911955 systemd-logind[1551]: Removed session 26. Nov 6 00:27:10.959991 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:10.961722 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:10.966075 systemd-logind[1551]: New session 27 of user core. Nov 6 00:27:10.972368 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 00:27:13.727372 containerd[1572]: time="2025-11-06T00:27:13.727283816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" id:\"c9ae5c8a46e8b350915122fae4a8c440b3c1457a92a43e2b2ddc0270472f8015\" pid:4436 exited_at:{seconds:1762388833 nanos:726900823}" Nov 6 00:27:13.729035 containerd[1572]: time="2025-11-06T00:27:13.728986751Z" level=info msg="StopContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" with timeout 2 (s)" Nov 6 00:27:13.729502 containerd[1572]: time="2025-11-06T00:27:13.729453332Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:27:13.735030 containerd[1572]: time="2025-11-06T00:27:13.735005042Z" level=info msg="Stop container \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" with signal terminated" Nov 6 00:27:13.742800 systemd-networkd[1495]: lxc_health: Link DOWN Nov 6 00:27:13.742807 systemd-networkd[1495]: lxc_health: Lost carrier Nov 6 00:27:13.767293 systemd[1]: cri-containerd-16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b.scope: Deactivated successfully. 
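The cilium-agent shutdown at 00:27:13 is the graceful CRI stop path: containerd reports "StopContainer ... with timeout 2 (s)", delivers SIGTERM ("with signal terminated"), the lxc_health link drops, and the container's cgroup scope is deactivated once the task exits. In CRI terms this is one StopContainer call whose Timeout field is the grace period in seconds. A minimal helper is sketched below; it assumes a RuntimeServiceClient wired up as in the earlier sketch and is not an excerpt of kubelet.

```go
package crihelpers

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopWithGrace asks the runtime to stop a container, giving it graceSeconds
// to exit after SIGTERM before the runtime escalates to SIGKILL. A value of 2
// matches the "with timeout 2 (s)" entry in the log above.
func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string, graceSeconds int64) error {
	_, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: id,
		Timeout:     graceSeconds,
	})
	return err
}
```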
Nov 6 00:27:13.767760 systemd[1]: cri-containerd-16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b.scope: Consumed 7.369s CPU time, 124.1M memory peak, 220K read from disk, 13.3M written to disk. Nov 6 00:27:13.768727 containerd[1572]: time="2025-11-06T00:27:13.768497098Z" level=info msg="received exit event container_id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" pid:3406 exited_at:{seconds:1762388833 nanos:767983848}" Nov 6 00:27:13.768727 containerd[1572]: time="2025-11-06T00:27:13.768539327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" id:\"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" pid:3406 exited_at:{seconds:1762388833 nanos:767983848}" Nov 6 00:27:13.793372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b-rootfs.mount: Deactivated successfully. Nov 6 00:27:13.866279 containerd[1572]: time="2025-11-06T00:27:13.866206040Z" level=info msg="StopContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" with timeout 30 (s)" Nov 6 00:27:13.867508 containerd[1572]: time="2025-11-06T00:27:13.867474334Z" level=info msg="Stop container \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" with signal terminated" Nov 6 00:27:13.879684 systemd[1]: cri-containerd-73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c.scope: Deactivated successfully. Nov 6 00:27:13.882094 containerd[1572]: time="2025-11-06T00:27:13.882058686Z" level=info msg="received exit event container_id:\"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" id:\"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" pid:3331 exited_at:{seconds:1762388833 nanos:881472439}" Nov 6 00:27:13.882353 containerd[1572]: time="2025-11-06T00:27:13.882331130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" id:\"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" pid:3331 exited_at:{seconds:1762388833 nanos:881472439}" Nov 6 00:27:13.902879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c-rootfs.mount: Deactivated successfully. 
Nov 6 00:27:14.013530 kubelet[2749]: E1106 00:27:14.013344 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:14.145788 containerd[1572]: time="2025-11-06T00:27:14.145704936Z" level=info msg="StopContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" returns successfully" Nov 6 00:27:14.147636 containerd[1572]: time="2025-11-06T00:27:14.147595325Z" level=info msg="StopContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" returns successfully" Nov 6 00:27:14.150660 containerd[1572]: time="2025-11-06T00:27:14.150604165Z" level=info msg="StopPodSandbox for \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\"" Nov 6 00:27:14.150741 containerd[1572]: time="2025-11-06T00:27:14.150702290Z" level=info msg="Container to stop \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.155363 containerd[1572]: time="2025-11-06T00:27:14.155304837Z" level=info msg="StopPodSandbox for \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\"" Nov 6 00:27:14.155481 containerd[1572]: time="2025-11-06T00:27:14.155379779Z" level=info msg="Container to stop \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.155481 containerd[1572]: time="2025-11-06T00:27:14.155395659Z" level=info msg="Container to stop \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.155481 containerd[1572]: time="2025-11-06T00:27:14.155405818Z" level=info msg="Container to stop \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.155481 containerd[1572]: time="2025-11-06T00:27:14.155415667Z" level=info msg="Container to stop \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.155481 containerd[1572]: time="2025-11-06T00:27:14.155425235Z" level=info msg="Container to stop \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:27:14.159420 systemd[1]: cri-containerd-8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0.scope: Deactivated successfully. Nov 6 00:27:14.162581 containerd[1572]: time="2025-11-06T00:27:14.162472237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" id:\"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" pid:2955 exit_status:137 exited_at:{seconds:1762388834 nanos:161982412}" Nov 6 00:27:14.164262 systemd[1]: cri-containerd-f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff.scope: Deactivated successfully. Nov 6 00:27:14.196104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff-rootfs.mount: Deactivated successfully. 
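Both sandbox TaskExit events above report exit_status:137. By the usual 128-plus-signal convention that means the sandbox processes ended on signal 9 (SIGKILL) as the sandboxes were torn down. A small, self-contained decoder for that convention:

```go
package main

import (
	"fmt"
	"syscall"
)

// describeExit interprets an exit status using the common 128+N convention
// for processes terminated by a signal.
func describeExit(status int) string {
	if status > 128 {
		sig := syscall.Signal(status - 128)
		return fmt.Sprintf("terminated by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited with code %d", status)
}

func main() {
	fmt.Println(describeExit(137)) // exit_status from the sandbox TaskExit events above
}
```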
Nov 6 00:27:14.202208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0-rootfs.mount: Deactivated successfully. Nov 6 00:27:14.265937 containerd[1572]: time="2025-11-06T00:27:14.264157800Z" level=info msg="shim disconnected" id=8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0 namespace=k8s.io Nov 6 00:27:14.265937 containerd[1572]: time="2025-11-06T00:27:14.264241538Z" level=warning msg="cleaning up after shim disconnected" id=8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0 namespace=k8s.io Nov 6 00:27:14.271579 containerd[1572]: time="2025-11-06T00:27:14.264253300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:27:14.271776 containerd[1572]: time="2025-11-06T00:27:14.264178889Z" level=info msg="shim disconnected" id=f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff namespace=k8s.io Nov 6 00:27:14.271776 containerd[1572]: time="2025-11-06T00:27:14.271652958Z" level=warning msg="cleaning up after shim disconnected" id=f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff namespace=k8s.io Nov 6 00:27:14.271776 containerd[1572]: time="2025-11-06T00:27:14.271661344Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:27:14.334873 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff-shm.mount: Deactivated successfully. Nov 6 00:27:14.337252 containerd[1572]: time="2025-11-06T00:27:14.336919806Z" level=info msg="received exit event sandbox_id:\"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" exit_status:137 exited_at:{seconds:1762388834 nanos:167067191}" Nov 6 00:27:14.346346 containerd[1572]: time="2025-11-06T00:27:14.345395946Z" level=info msg="TearDown network for sandbox \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" successfully" Nov 6 00:27:14.346346 containerd[1572]: time="2025-11-06T00:27:14.345465687Z" level=info msg="StopPodSandbox for \"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" returns successfully" Nov 6 00:27:14.346346 containerd[1572]: time="2025-11-06T00:27:14.345810728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" id:\"f0cbd89771b7f65496103fdb184ad0c6e8df2955813769b383e9e53c8dc747ff\" pid:2956 exit_status:137 exited_at:{seconds:1762388834 nanos:167067191}" Nov 6 00:27:14.346346 containerd[1572]: time="2025-11-06T00:27:14.346190245Z" level=info msg="received exit event sandbox_id:\"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" exit_status:137 exited_at:{seconds:1762388834 nanos:161982412}" Nov 6 00:27:14.356938 containerd[1572]: time="2025-11-06T00:27:14.355808190Z" level=info msg="TearDown network for sandbox \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" successfully" Nov 6 00:27:14.356938 containerd[1572]: time="2025-11-06T00:27:14.355851722Z" level=info msg="StopPodSandbox for \"8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0\" returns successfully" Nov 6 00:27:14.421327 kubelet[2749]: I1106 00:27:14.420793 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-etc-cni-netd\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421327 
kubelet[2749]: I1106 00:27:14.420884 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-clustermesh-secrets\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421327 kubelet[2749]: I1106 00:27:14.420916 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-xtables-lock\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421327 kubelet[2749]: I1106 00:27:14.420944 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hubble-tls\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421327 kubelet[2749]: I1106 00:27:14.420965 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-lib-modules\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421327 kubelet[2749]: I1106 00:27:14.420985 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hostproc\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421006 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6k8j\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-kube-api-access-n6k8j\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421028 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-kernel\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421061 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-net\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421086 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-bpf-maps\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421108 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-cgroup\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421756 kubelet[2749]: I1106 00:27:14.421138 2749 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cni-path\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421944 kubelet[2749]: I1106 00:27:14.421179 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-config-path\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421944 kubelet[2749]: I1106 00:27:14.421205 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-run\") pod \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\" (UID: \"f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80\") " Nov 6 00:27:14.421944 kubelet[2749]: I1106 00:27:14.421400 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.421944 kubelet[2749]: I1106 00:27:14.421461 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.422073 kubelet[2749]: I1106 00:27:14.421941 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.422073 kubelet[2749]: I1106 00:27:14.422028 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423021 kubelet[2749]: I1106 00:27:14.422480 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423021 kubelet[2749]: I1106 00:27:14.422500 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423021 kubelet[2749]: I1106 00:27:14.422538 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423021 kubelet[2749]: I1106 00:27:14.422544 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423021 kubelet[2749]: I1106 00:27:14.422567 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.423380 kubelet[2749]: I1106 00:27:14.422581 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:27:14.426918 kubelet[2749]: I1106 00:27:14.426466 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:27:14.430855 kubelet[2749]: I1106 00:27:14.430697 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:27:14.431040 kubelet[2749]: I1106 00:27:14.430866 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:27:14.431040 kubelet[2749]: I1106 00:27:14.430948 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-kube-api-access-n6k8j" (OuterVolumeSpecName: "kube-api-access-n6k8j") pod "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" (UID: "f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80"). InnerVolumeSpecName "kube-api-access-n6k8j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522299 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd8qc\" (UniqueName: \"kubernetes.io/projected/a22b28ac-1f95-4e81-b225-bd777e3f9e14-kube-api-access-cd8qc\") pod \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\" (UID: \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\") " Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522367 2749 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a22b28ac-1f95-4e81-b225-bd777e3f9e14-cilium-config-path\") pod \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\" (UID: \"a22b28ac-1f95-4e81-b225-bd777e3f9e14\") " Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522418 2749 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522434 2749 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522450 2749 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522463 2749 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522472 2749 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522493 kubelet[2749]: I1106 00:27:14.522483 2749 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522494 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6k8j\" (UniqueName: \"kubernetes.io/projected/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-kube-api-access-n6k8j\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522503 2749 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522513 2749 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522523 2749 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522533 2749 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522542 2749 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522551 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.522784 kubelet[2749]: I1106 00:27:14.522561 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.528142 kubelet[2749]: I1106 00:27:14.528077 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22b28ac-1f95-4e81-b225-bd777e3f9e14-kube-api-access-cd8qc" (OuterVolumeSpecName: "kube-api-access-cd8qc") pod "a22b28ac-1f95-4e81-b225-bd777e3f9e14" (UID: "a22b28ac-1f95-4e81-b225-bd777e3f9e14"). InnerVolumeSpecName "kube-api-access-cd8qc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:27:14.531431 kubelet[2749]: I1106 00:27:14.531370 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22b28ac-1f95-4e81-b225-bd777e3f9e14-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a22b28ac-1f95-4e81-b225-bd777e3f9e14" (UID: "a22b28ac-1f95-4e81-b225-bd777e3f9e14"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:27:14.607473 sshd[4411]: Connection closed by 10.0.0.1 port 42986 Nov 6 00:27:14.609502 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:14.624757 kubelet[2749]: I1106 00:27:14.624682 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cd8qc\" (UniqueName: \"kubernetes.io/projected/a22b28ac-1f95-4e81-b225-bd777e3f9e14-kube-api-access-cd8qc\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.624757 kubelet[2749]: I1106 00:27:14.624724 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a22b28ac-1f95-4e81-b225-bd777e3f9e14-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 00:27:14.626374 kubelet[2749]: I1106 00:27:14.625419 2749 scope.go:117] "RemoveContainer" containerID="73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c" Nov 6 00:27:14.629949 systemd[1]: sshd@26-10.0.0.90:22-10.0.0.1:42986.service: Deactivated successfully. Nov 6 00:27:14.633213 containerd[1572]: time="2025-11-06T00:27:14.633124924Z" level=info msg="RemoveContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\"" Nov 6 00:27:14.634890 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:27:14.636483 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit. Nov 6 00:27:14.643072 systemd[1]: Removed slice kubepods-besteffort-poda22b28ac_1f95_4e81_b225_bd777e3f9e14.slice - libcontainer container kubepods-besteffort-poda22b28ac_1f95_4e81_b225_bd777e3f9e14.slice. 
Nov 6 00:27:14.646468 kubelet[2749]: I1106 00:27:14.644460 2749 scope.go:117] "RemoveContainer" containerID="73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c" Nov 6 00:27:14.646522 containerd[1572]: time="2025-11-06T00:27:14.643763546Z" level=info msg="RemoveContainer for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" returns successfully" Nov 6 00:27:14.646522 containerd[1572]: time="2025-11-06T00:27:14.644856149Z" level=error msg="ContainerStatus for \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\": not found" Nov 6 00:27:14.646922 kubelet[2749]: E1106 00:27:14.646866 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\": not found" containerID="73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c" Nov 6 00:27:14.647005 kubelet[2749]: I1106 00:27:14.646916 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c"} err="failed to get container status \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"73509feebc60f1bc471f3f6511ca67d8d98e0967e77e3ad02d9e677e67dfca7c\": not found" Nov 6 00:27:14.647005 kubelet[2749]: I1106 00:27:14.646958 2749 scope.go:117] "RemoveContainer" containerID="16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b" Nov 6 00:27:14.648623 systemd[1]: Started sshd@27-10.0.0.90:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002). Nov 6 00:27:14.650165 systemd-logind[1551]: Removed session 27. Nov 6 00:27:14.651473 containerd[1572]: time="2025-11-06T00:27:14.651425940Z" level=info msg="RemoveContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\"" Nov 6 00:27:14.653592 systemd[1]: Removed slice kubepods-burstable-podf9c65ade_d44b_4842_bc4f_f4ce5dc0aa80.slice - libcontainer container kubepods-burstable-podf9c65ade_d44b_4842_bc4f_f4ce5dc0aa80.slice. Nov 6 00:27:14.653771 systemd[1]: kubepods-burstable-podf9c65ade_d44b_4842_bc4f_f4ce5dc0aa80.slice: Consumed 7.537s CPU time, 124.4M memory peak, 236K read from disk, 13.3M written to disk. 
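The "ContainerStatus ... not found" errors surrounding each RemoveContainer are benign in this sequence: the containers were removed a moment earlier, so the runtime answers with gRPC NotFound and kubelet records it while finishing cleanup. A tiny helper showing that check, assuming CRI errors are surfaced as gRPC status errors, as they are here:

```go
package crihelpers

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the runtime no
// longer knows the container, i.e. the gRPC status code is NotFound; this is
// the condition behind the "not found" ContainerStatus errors in the log.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}
```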
Nov 6 00:27:14.663423 containerd[1572]: time="2025-11-06T00:27:14.663265048Z" level=info msg="RemoveContainer for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" returns successfully" Nov 6 00:27:14.664709 kubelet[2749]: I1106 00:27:14.663862 2749 scope.go:117] "RemoveContainer" containerID="36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a" Nov 6 00:27:14.666969 containerd[1572]: time="2025-11-06T00:27:14.666831540Z" level=info msg="RemoveContainer for \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\"" Nov 6 00:27:14.675948 containerd[1572]: time="2025-11-06T00:27:14.675867576Z" level=info msg="RemoveContainer for \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" returns successfully" Nov 6 00:27:14.676375 kubelet[2749]: I1106 00:27:14.676308 2749 scope.go:117] "RemoveContainer" containerID="528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca" Nov 6 00:27:14.683503 containerd[1572]: time="2025-11-06T00:27:14.683455940Z" level=info msg="RemoveContainer for \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\"" Nov 6 00:27:14.691664 containerd[1572]: time="2025-11-06T00:27:14.691594943Z" level=info msg="RemoveContainer for \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" returns successfully" Nov 6 00:27:14.692028 kubelet[2749]: I1106 00:27:14.691984 2749 scope.go:117] "RemoveContainer" containerID="17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861" Nov 6 00:27:14.693886 containerd[1572]: time="2025-11-06T00:27:14.693839761Z" level=info msg="RemoveContainer for \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\"" Nov 6 00:27:14.730551 containerd[1572]: time="2025-11-06T00:27:14.730498398Z" level=info msg="RemoveContainer for \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" returns successfully" Nov 6 00:27:14.731015 kubelet[2749]: I1106 00:27:14.730919 2749 scope.go:117] "RemoveContainer" containerID="7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c" Nov 6 00:27:14.736282 containerd[1572]: time="2025-11-06T00:27:14.734562700Z" level=info msg="RemoveContainer for \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\"" Nov 6 00:27:14.741584 containerd[1572]: time="2025-11-06T00:27:14.741517718Z" level=info msg="RemoveContainer for \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" returns successfully" Nov 6 00:27:14.742031 kubelet[2749]: I1106 00:27:14.741971 2749 scope.go:117] "RemoveContainer" containerID="16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b" Nov 6 00:27:14.742582 containerd[1572]: time="2025-11-06T00:27:14.742472911Z" level=error msg="ContainerStatus for \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\": not found" Nov 6 00:27:14.742824 kubelet[2749]: E1106 00:27:14.742786 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\": not found" containerID="16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b" Nov 6 00:27:14.742970 kubelet[2749]: I1106 00:27:14.742820 2749 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b"} err="failed to get container status \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\": rpc error: code = NotFound desc = an error occurred when try to find container \"16d46ca74b42de7358d705cb1591cdb27bf42b78d01f3d547b84930513cd127b\": not found" Nov 6 00:27:14.742970 kubelet[2749]: I1106 00:27:14.742850 2749 scope.go:117] "RemoveContainer" containerID="36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a" Nov 6 00:27:14.743168 containerd[1572]: time="2025-11-06T00:27:14.743110405Z" level=error msg="ContainerStatus for \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\": not found" Nov 6 00:27:14.743990 kubelet[2749]: E1106 00:27:14.743854 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\": not found" containerID="36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a" Nov 6 00:27:14.743990 kubelet[2749]: I1106 00:27:14.743888 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a"} err="failed to get container status \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"36153e04be304aea354e3c4f05201eb0a3e7882f5d1e932d9860011095577d6a\": not found" Nov 6 00:27:14.743990 kubelet[2749]: I1106 00:27:14.743907 2749 scope.go:117] "RemoveContainer" containerID="528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca" Nov 6 00:27:14.744141 containerd[1572]: time="2025-11-06T00:27:14.744101977Z" level=error msg="ContainerStatus for \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\": not found" Nov 6 00:27:14.744417 kubelet[2749]: E1106 00:27:14.744379 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\": not found" containerID="528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca" Nov 6 00:27:14.744471 kubelet[2749]: I1106 00:27:14.744429 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca"} err="failed to get container status \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"528c80632c5d4eabbaf56649f4f924d6709fa46cc9f5583f8012632e848843ca\": not found" Nov 6 00:27:14.744517 kubelet[2749]: I1106 00:27:14.744479 2749 scope.go:117] "RemoveContainer" containerID="17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861" Nov 6 00:27:14.744852 containerd[1572]: time="2025-11-06T00:27:14.744821305Z" level=error msg="ContainerStatus for \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\": not found" Nov 6 00:27:14.745094 kubelet[2749]: E1106 00:27:14.745065 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\": not found" containerID="17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861" Nov 6 00:27:14.745186 kubelet[2749]: I1106 00:27:14.745095 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861"} err="failed to get container status \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\": rpc error: code = NotFound desc = an error occurred when try to find container \"17faab8b8737a8c1dc229a71a5e80759a30ddd22fa8f282e8dd11e37c0bfc861\": not found" Nov 6 00:27:14.745186 kubelet[2749]: I1106 00:27:14.745114 2749 scope.go:117] "RemoveContainer" containerID="7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c" Nov 6 00:27:14.745365 containerd[1572]: time="2025-11-06T00:27:14.745329243Z" level=error msg="ContainerStatus for \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\": not found" Nov 6 00:27:14.745492 kubelet[2749]: E1106 00:27:14.745465 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\": not found" containerID="7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c" Nov 6 00:27:14.745537 kubelet[2749]: I1106 00:27:14.745493 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c"} err="failed to get container status \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a049df712d6284dc738d90c740529df08276ad7097e93c004e85e0ed59a691c\": not found" Nov 6 00:27:14.746723 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:14.748972 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:14.760441 systemd-logind[1551]: New session 28 of user core. Nov 6 00:27:14.776751 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 6 00:27:14.793014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a2b1c06430bbb8492b6238d1d92fe31b30402a6486a00ec61b7ca2eea672ad0-shm.mount: Deactivated successfully. Nov 6 00:27:14.793212 systemd[1]: var-lib-kubelet-pods-f9c65ade\x2dd44b\x2d4842\x2dbc4f\x2df4ce5dc0aa80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6k8j.mount: Deactivated successfully. Nov 6 00:27:14.793377 systemd[1]: var-lib-kubelet-pods-a22b28ac\x2d1f95\x2d4e81\x2db225\x2dbd777e3f9e14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcd8qc.mount: Deactivated successfully. 
Nov 6 00:27:14.793498 systemd[1]: var-lib-kubelet-pods-f9c65ade\x2dd44b\x2d4842\x2dbc4f\x2df4ce5dc0aa80-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 00:27:14.793629 systemd[1]: var-lib-kubelet-pods-f9c65ade\x2dd44b\x2d4842\x2dbc4f\x2df4ce5dc0aa80-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 00:27:15.016441 kubelet[2749]: I1106 00:27:15.016319 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22b28ac-1f95-4e81-b225-bd777e3f9e14" path="/var/lib/kubelet/pods/a22b28ac-1f95-4e81-b225-bd777e3f9e14/volumes" Nov 6 00:27:15.017140 kubelet[2749]: I1106 00:27:15.017114 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80" path="/var/lib/kubelet/pods/f9c65ade-d44b-4842-bc4f-f4ce5dc0aa80/volumes" Nov 6 00:27:15.690780 sshd[4571]: Connection closed by 10.0.0.1 port 43002 Nov 6 00:27:15.692441 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:15.707315 systemd[1]: sshd@27-10.0.0.90:22-10.0.0.1:43002.service: Deactivated successfully. Nov 6 00:27:15.713038 systemd[1]: session-28.scope: Deactivated successfully. Nov 6 00:27:15.715325 systemd-logind[1551]: Session 28 logged out. Waiting for processes to exit. Nov 6 00:27:15.720909 systemd[1]: Started sshd@28-10.0.0.90:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008). Nov 6 00:27:15.721615 systemd-logind[1551]: Removed session 28. Nov 6 00:27:15.747413 systemd[1]: Created slice kubepods-burstable-pod3c4ac09c_08f7_4ed5_a4f8_d2f05d67eb9b.slice - libcontainer container kubepods-burstable-pod3c4ac09c_08f7_4ed5_a4f8_d2f05d67eb9b.slice. Nov 6 00:27:15.779924 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:15.781847 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:15.788468 systemd-logind[1551]: New session 29 of user core. Nov 6 00:27:15.799447 systemd[1]: Started session-29.scope - Session 29 of User core. 
Nov 6 00:27:15.830899 kubelet[2749]: I1106 00:27:15.830811 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-cilium-config-path\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.830899 kubelet[2749]: I1106 00:27:15.830878 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdnnv\" (UniqueName: \"kubernetes.io/projected/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-kube-api-access-xdnnv\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.830899 kubelet[2749]: I1106 00:27:15.830909 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-host-proc-sys-kernel\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.830967 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-hostproc\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.830990 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-cilium-ipsec-secrets\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.831009 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-cilium-run\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.831028 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-etc-cni-netd\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.831050 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-lib-modules\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.831268 kubelet[2749]: I1106 00:27:15.831074 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-clustermesh-secrets\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831095 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-host-proc-sys-net\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831113 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-hubble-tls\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831150 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-cilium-cgroup\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831174 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-cni-path\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831195 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-bpf-maps\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.832047 kubelet[2749]: I1106 00:27:15.831210 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b-xtables-lock\") pod \"cilium-k9pz6\" (UID: \"3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b\") " pod="kube-system/cilium-k9pz6" Nov 6 00:27:15.867068 sshd[4587]: Connection closed by 10.0.0.1 port 43008 Nov 6 00:27:15.867553 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:15.883602 systemd[1]: sshd@28-10.0.0.90:22-10.0.0.1:43008.service: Deactivated successfully. Nov 6 00:27:15.886546 systemd[1]: session-29.scope: Deactivated successfully. Nov 6 00:27:15.889088 systemd-logind[1551]: Session 29 logged out. Waiting for processes to exit. Nov 6 00:27:15.895261 systemd[1]: Started sshd@29-10.0.0.90:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016). Nov 6 00:27:15.896063 systemd-logind[1551]: Removed session 29. Nov 6 00:27:15.961052 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:jmpcjt0SOllQ8hz1dCOl2Df8XidkA38Tt4jnzBYFoKE Nov 6 00:27:15.962585 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:15.971525 systemd-logind[1551]: New session 30 of user core. Nov 6 00:27:15.982580 systemd[1]: Started session-30.scope - Session 30 of User core. 
Nov 6 00:27:16.074916 kubelet[2749]: E1106 00:27:16.074847 2749 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:27:16.253099 kubelet[2749]: E1106 00:27:16.252895 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:16.253722 containerd[1572]: time="2025-11-06T00:27:16.253673980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9pz6,Uid:3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:16.606187 containerd[1572]: time="2025-11-06T00:27:16.606110611Z" level=info msg="connecting to shim 907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:16.654849 systemd[1]: Started cri-containerd-907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510.scope - libcontainer container 907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510. Nov 6 00:27:16.687486 containerd[1572]: time="2025-11-06T00:27:16.687406340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9pz6,Uid:3c4ac09c-08f7-4ed5-a4f8-d2f05d67eb9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\"" Nov 6 00:27:16.689352 kubelet[2749]: E1106 00:27:16.688988 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:16.704042 containerd[1572]: time="2025-11-06T00:27:16.703790161Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:27:16.715857 containerd[1572]: time="2025-11-06T00:27:16.715787314Z" level=info msg="Container 4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:16.731859 containerd[1572]: time="2025-11-06T00:27:16.731781379Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\"" Nov 6 00:27:16.733789 containerd[1572]: time="2025-11-06T00:27:16.732426086Z" level=info msg="StartContainer for \"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\"" Nov 6 00:27:16.733789 containerd[1572]: time="2025-11-06T00:27:16.733494623Z" level=info msg="connecting to shim 4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" protocol=ttrpc version=3 Nov 6 00:27:16.773554 systemd[1]: Started cri-containerd-4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986.scope - libcontainer container 4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986. 
Nov 6 00:27:16.829688 containerd[1572]: time="2025-11-06T00:27:16.829636480Z" level=info msg="StartContainer for \"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\" returns successfully" Nov 6 00:27:16.838004 systemd[1]: cri-containerd-4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986.scope: Deactivated successfully. Nov 6 00:27:16.839852 containerd[1572]: time="2025-11-06T00:27:16.839803989Z" level=info msg="received exit event container_id:\"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\" id:\"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\" pid:4668 exited_at:{seconds:1762388836 nanos:839441846}" Nov 6 00:27:16.840374 containerd[1572]: time="2025-11-06T00:27:16.840329511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\" id:\"4dc1c5edb708f48ea93258082c676cb36fe7955b3e7e18bc17d79ff9d26fa986\" pid:4668 exited_at:{seconds:1762388836 nanos:839441846}" Nov 6 00:27:17.650946 kubelet[2749]: E1106 00:27:17.650904 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:17.778359 containerd[1572]: time="2025-11-06T00:27:17.778275485Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:27:17.854782 containerd[1572]: time="2025-11-06T00:27:17.854708505Z" level=info msg="Container 67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:17.939743 containerd[1572]: time="2025-11-06T00:27:17.939588977Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\"" Nov 6 00:27:17.940605 containerd[1572]: time="2025-11-06T00:27:17.940561563Z" level=info msg="StartContainer for \"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\"" Nov 6 00:27:17.942127 containerd[1572]: time="2025-11-06T00:27:17.941731540Z" level=info msg="connecting to shim 67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" protocol=ttrpc version=3 Nov 6 00:27:17.971590 systemd[1]: Started cri-containerd-67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a.scope - libcontainer container 67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a. Nov 6 00:27:18.016662 containerd[1572]: time="2025-11-06T00:27:18.016608552Z" level=info msg="StartContainer for \"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\" returns successfully" Nov 6 00:27:18.016867 systemd[1]: cri-containerd-67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a.scope: Deactivated successfully. 
Nov 6 00:27:18.018272 containerd[1572]: time="2025-11-06T00:27:18.018239059Z" level=info msg="received exit event container_id:\"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\" id:\"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\" pid:4712 exited_at:{seconds:1762388838 nanos:17988666}" Nov 6 00:27:18.018519 containerd[1572]: time="2025-11-06T00:27:18.018318138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\" id:\"67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a\" pid:4712 exited_at:{seconds:1762388838 nanos:17988666}" Nov 6 00:27:18.045905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67747b91c6579a9640b440e9b50d4e7311f416299f751daddeb836d4b4f2357a-rootfs.mount: Deactivated successfully. Nov 6 00:27:18.655878 kubelet[2749]: E1106 00:27:18.655558 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:18.792632 containerd[1572]: time="2025-11-06T00:27:18.792504907Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:27:19.057461 containerd[1572]: time="2025-11-06T00:27:19.057296060Z" level=info msg="Container 6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:19.287417 containerd[1572]: time="2025-11-06T00:27:19.287363955Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\"" Nov 6 00:27:19.287951 containerd[1572]: time="2025-11-06T00:27:19.287919393Z" level=info msg="StartContainer for \"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\"" Nov 6 00:27:19.289645 containerd[1572]: time="2025-11-06T00:27:19.289612278Z" level=info msg="connecting to shim 6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" protocol=ttrpc version=3 Nov 6 00:27:19.317526 systemd[1]: Started cri-containerd-6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f.scope - libcontainer container 6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f. Nov 6 00:27:19.365608 systemd[1]: cri-containerd-6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f.scope: Deactivated successfully. 
Nov 6 00:27:19.368214 containerd[1572]: time="2025-11-06T00:27:19.368164985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\" id:\"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\" pid:4759 exited_at:{seconds:1762388839 nanos:367867484}" Nov 6 00:27:19.437446 containerd[1572]: time="2025-11-06T00:27:19.437332975Z" level=info msg="received exit event container_id:\"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\" id:\"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\" pid:4759 exited_at:{seconds:1762388839 nanos:367867484}" Nov 6 00:27:19.448461 containerd[1572]: time="2025-11-06T00:27:19.448415466Z" level=info msg="StartContainer for \"6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f\" returns successfully" Nov 6 00:27:19.463965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f250371751ddd1a032e06481cb1d4e181f45f1e208cbd8a1bf63003a832b19f-rootfs.mount: Deactivated successfully. Nov 6 00:27:19.661917 kubelet[2749]: E1106 00:27:19.661863 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:19.760538 containerd[1572]: time="2025-11-06T00:27:19.760484155Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:27:20.065308 containerd[1572]: time="2025-11-06T00:27:20.065166739Z" level=info msg="Container 487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:20.069477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430827105.mount: Deactivated successfully. Nov 6 00:27:20.282805 containerd[1572]: time="2025-11-06T00:27:20.282705815Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\"" Nov 6 00:27:20.283654 containerd[1572]: time="2025-11-06T00:27:20.283585304Z" level=info msg="StartContainer for \"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\"" Nov 6 00:27:20.284939 containerd[1572]: time="2025-11-06T00:27:20.284905195Z" level=info msg="connecting to shim 487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" protocol=ttrpc version=3 Nov 6 00:27:20.311514 systemd[1]: Started cri-containerd-487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138.scope - libcontainer container 487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138. Nov 6 00:27:20.347200 systemd[1]: cri-containerd-487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138.scope: Deactivated successfully. 
Nov 6 00:27:20.358049 containerd[1572]: time="2025-11-06T00:27:20.347766406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\" id:\"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\" pid:4800 exited_at:{seconds:1762388840 nanos:347392231}" Nov 6 00:27:20.410249 containerd[1572]: time="2025-11-06T00:27:20.408105119Z" level=info msg="received exit event container_id:\"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\" id:\"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\" pid:4800 exited_at:{seconds:1762388840 nanos:347392231}" Nov 6 00:27:20.418565 containerd[1572]: time="2025-11-06T00:27:20.418513307Z" level=info msg="StartContainer for \"487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138\" returns successfully" Nov 6 00:27:20.432454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-487ba5affebb3425fb99cdae2abda1df9c252f3a57d9f0780722c9ac59c34138-rootfs.mount: Deactivated successfully. Nov 6 00:27:20.668958 kubelet[2749]: E1106 00:27:20.668611 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:20.844537 containerd[1572]: time="2025-11-06T00:27:20.844482402Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:27:20.921865 containerd[1572]: time="2025-11-06T00:27:20.921687366Z" level=info msg="Container 917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:20.942431 containerd[1572]: time="2025-11-06T00:27:20.942373445Z" level=info msg="CreateContainer within sandbox \"907d0fec1ca0f0a5b2aedf4e26c6974f289a2562ccf568cf5dafd73e29922510\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\"" Nov 6 00:27:20.943236 containerd[1572]: time="2025-11-06T00:27:20.943177242Z" level=info msg="StartContainer for \"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\"" Nov 6 00:27:20.944494 containerd[1572]: time="2025-11-06T00:27:20.944442799Z" level=info msg="connecting to shim 917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b" address="unix:///run/containerd/s/f062b17ab32a05422eb8346bf3176e939e23c49c4df69cda2a33a92e1ae2da72" protocol=ttrpc version=3 Nov 6 00:27:20.977467 systemd[1]: Started cri-containerd-917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b.scope - libcontainer container 917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b. 
Nov 6 00:27:21.075812 kubelet[2749]: E1106 00:27:21.075757 2749 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:27:21.186361 containerd[1572]: time="2025-11-06T00:27:21.186180106Z" level=info msg="StartContainer for \"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" returns successfully" Nov 6 00:27:21.286002 containerd[1572]: time="2025-11-06T00:27:21.285952175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"a0f3339e8e339032748e46bfa8255b2c51e230450520fcd38694bc9a1d788512\" pid:4876 exited_at:{seconds:1762388841 nanos:285543675}" Nov 6 00:27:21.532330 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 6 00:27:21.680141 kubelet[2749]: E1106 00:27:21.679597 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:21.987233 kubelet[2749]: I1106 00:27:21.987029 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k9pz6" podStartSLOduration=6.987006407 podStartE2EDuration="6.987006407s" podCreationTimestamp="2025-11-06 00:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:21.986702314 +0000 UTC m=+131.097849664" watchObservedRunningTime="2025-11-06 00:27:21.987006407 +0000 UTC m=+131.098153747" Nov 6 00:27:22.013056 kubelet[2749]: E1106 00:27:22.012964 2749 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-cdt9x" podUID="96b5b61a-69fc-4521-a941-bfcaac21dc2c" Nov 6 00:27:22.681846 kubelet[2749]: E1106 00:27:22.681788 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:22.847175 containerd[1572]: time="2025-11-06T00:27:22.847120151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"b9f05bb349b8ce9ea513420b432c44265c0c8315f7a966ebefe509da1323d22d\" pid:4976 exit_status:1 exited_at:{seconds:1762388842 nanos:846808323}" Nov 6 00:27:24.012814 kubelet[2749]: E1106 00:27:24.012743 2749 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-cdt9x" podUID="96b5b61a-69fc-4521-a941-bfcaac21dc2c" Nov 6 00:27:24.973030 containerd[1572]: time="2025-11-06T00:27:24.972963534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"c85d0e6b387da11bf4f10201e30a409f91d1308902df4a0f92aa4f9b125a24b9\" pid:5285 exit_status:1 exited_at:{seconds:1762388844 nanos:972250139}" Nov 6 00:27:25.784658 systemd-networkd[1495]: lxc_health: Link UP Nov 6 00:27:25.785558 systemd-networkd[1495]: lxc_health: Gained carrier Nov 6 00:27:25.817629 
kubelet[2749]: I1106 00:27:25.817122 2749 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T00:27:25Z","lastTransitionTime":"2025-11-06T00:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 00:27:26.013354 kubelet[2749]: E1106 00:27:26.013265 2749 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-cdt9x" podUID="96b5b61a-69fc-4521-a941-bfcaac21dc2c" Nov 6 00:27:26.059716 kubelet[2749]: E1106 00:27:26.056283 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:26.702275 kubelet[2749]: E1106 00:27:26.701260 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:27.445479 systemd-networkd[1495]: lxc_health: Gained IPv6LL Nov 6 00:27:27.455965 containerd[1572]: time="2025-11-06T00:27:27.455879385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"b638e9f9f8f8685e1890b322b43ec6fd7baa15fa5f9345c1ad6f0724fb743763\" pid:5462 exited_at:{seconds:1762388847 nanos:454870203}" Nov 6 00:27:27.714681 kubelet[2749]: E1106 00:27:27.714522 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.012860 kubelet[2749]: E1106 00:27:28.012717 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:29.763999 containerd[1572]: time="2025-11-06T00:27:29.761971164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"f973f551a43fabb649fb658319921ff3e678ec3f18159f70ba75b036e91fd252\" pid:5493 exited_at:{seconds:1762388849 nanos:760680851}" Nov 6 00:27:31.016040 kubelet[2749]: E1106 00:27:31.015130 2749 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:31.938080 containerd[1572]: time="2025-11-06T00:27:31.932799992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"dc0e8e4eb884f19e435ebe119a099efbcfde141df4754dc71573b200c86100b3\" pid:5526 exited_at:{seconds:1762388851 nanos:932372216}" Nov 6 00:27:34.568151 containerd[1572]: time="2025-11-06T00:27:34.563257910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917452ad75ea061562e07ab7d2745ebee8d73e6f1af67176fc4c55b0b6b2873b\" id:\"649edb8d99c120c901369a15d48b390e138c0888039414cd7d0a401db583aedd\" pid:5551 exited_at:{seconds:1762388854 nanos:561565139}" Nov 6 00:27:34.579538 sshd[4601]: Connection closed by 10.0.0.1 port 43016 Nov 6 00:27:34.587563 sshd-session[4594]: 
pam_unix(sshd:session): session closed for user core Nov 6 00:27:34.605572 systemd[1]: sshd@29-10.0.0.90:22-10.0.0.1:43016.service: Deactivated successfully. Nov 6 00:27:34.619094 systemd[1]: session-30.scope: Deactivated successfully. Nov 6 00:27:34.625425 systemd-logind[1551]: Session 30 logged out. Waiting for processes to exit. Nov 6 00:27:34.634894 systemd-logind[1551]: Removed session 30.